
I work at a company that is over 40 years old, and it is incredible to see some very old software still functioning :) - lots of orphaned projects that keep rocking forever.

Some engineers who have been at the company for decades say that a lot of our legacy codebase was cutting edge at one point, and that many of their peers would have written it differently had they known the software would still be in use decades later. Safer, proven, simpler constructs with minimal dependencies seem to be the way.

How would you write your code today if you knew it would be your last commit and still in use in 30 years?



Archive your build chain - the version of the compiler, linker, libraries, and any other tools that you use to produce the executable. Archive the version of the OS they run on, too.


Just keep a bootable OS drive image for every project (or set of projects) that builds offline. Make sure to also download platform docs, dependency sources, and manuals - git clones, javadocs, ruby ri/rdoc, and so on. Even keep the IDE set up there.

I keep that habit currently by separating work into virtual machines. Storage is cheap, and I can come back to my project tomorrow or in 2050 with an amd64 emulator. It is also easy to back up or archive - just rsync the image to a NAS or burn it to DVD.


> Just keep bootable OS drive image for every project (set of projects) which builds offline.

Those images don't stay bootable, for different reasons:

* Media changes - try booting from your tape, or your floppy disk set.

* Unsupported new hardware - remember how your Linux boot partition needed to be at the beginning of the HDD? Or how Windows setup would not recognize SATA drives unless you added drivers from a special floppy disk?

* It boots, then gets stuck or goes blank on you - display driver issues being a common cause of this.


You are right. Solution: keep your boots simple and documented. Try them from time to time too.

I assume some common formats do not change, but it is good to keep some side notes on how to run the thing and what hardware to emulate.

I use Linux because it is open source and boots on the broadest hardware range possible - I am sure that in a few decades it will still run on an emulator of the most popular PC platform of the 2010s.


As an alternative to archiving the build chain, consider documenting a reproducible build from widely used components.

The risk is that the components of the reproducible build may no longer be available 50 years from now. But rolling your own archival is not bulletproof either, for the same reasons that untested backups aren't bulletproof.


The idea is that the components will be available 50 years from now, because you make sure you have them. All of them.

My answer to your last paragraph is: Test your backups! If you don't, then you don't actually have backups; you just have a nice dream.


I worked at a company that archived everything in version control, including checking in snapshots of the exact Visual Studio version used to build.


> How would you write your code today if you knew it would be your last commit and still in use in 30 years?

Generally: minimize dependencies. External library or API dependencies? Versions can drift, the system can change out from under you in incompatible ways. That goes for the OS, too, of course. Data dependency that you aren't 100% in control of? Same. All are forms of state, really. Code of the form "take thing, do thing, return thing, halt" (functional, if you like—describes an awful lot of your standard unixy command line tools) is practically eternal if statically compiled, as long as you can execute the binary. Longer, if the code and compiler are available.
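A rough sketch of that shape in C, depending only on the standard library so it can be statically linked (the tool and its behaviour are invented for illustration; build flags like cc -static vary by toolchain):

    /* trim.c - a hypothetical "take thing, do thing, halt" tool.
     * Depends only on the C standard library, so it can be statically
     * linked, e.g. cc -static -o trim trim.c (flags vary by toolchain). */
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char line[4096];

        /* Read stdin line by line, strip trailing whitespace, write to stdout. */
        while (fgets(line, sizeof line, stdin) != NULL) {
            size_t len = strlen(line);
            while (len > 0 && (line[len - 1] == '\n' || line[len - 1] == '\r' ||
                               line[len - 1] == ' '  || line[len - 1] == '\t'))
                len--;
            line[len] = '\0';
            puts(line);
        }
        return 0;
    }

No external state, no network, no config server: as long as something can execute the binary (or rebuild it from this source with any conforming C compiler), it keeps doing its one job.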


> Generally: minimize dependencies.

This. It doesn't mean go overboard with NIH, but you have to evaluate and select your dependencies judiciously. It's not about developer productivity with these types of products.

Also, make as much of your program configurable as possible so you can tweak things out in the field. For example, if you have a correlation timeout, make it configurable. But don't go overboard with that either. :)
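A minimal sketch of that in C, assuming the timeout is read from an environment variable with a compiled-in fallback (the variable name and default are made up):

    /* Hypothetical sketch: read a correlation timeout from the environment,
     * falling back to a compiled-in default if it is unset or invalid. */
    #include <stdio.h>
    #include <stdlib.h>

    #define DEFAULT_CORRELATION_TIMEOUT_MS 5000L

    static long correlation_timeout_ms(void) {
        const char *s = getenv("CORRELATION_TIMEOUT_MS"); /* assumed name */
        if (s != NULL) {
            char *end;
            long v = strtol(s, &end, 10);
            if (*end == '\0' && v > 0)
                return v;
            fprintf(stderr, "ignoring invalid CORRELATION_TIMEOUT_MS=%s\n", s);
        }
        return DEFAULT_CORRELATION_TIMEOUT_MS;
    }

    int main(void) {
        printf("using correlation timeout of %ld ms\n", correlation_timeout_ms());
        return 0;
    }

The point is the sane default plus an override knob, not the specific mechanism; a config file works just as well.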


Another aspect of this is to pick dependencies that are well encapsulated (so if you need to change or update them, it's generally easy to).

Of course, this is just a good choice regardless. Still, it shocks me how often people choose libraries and frameworks that impose a very opinionated structure on large swathes of code, rather than having well-defined, minimal touchpoints.
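As a C sketch of what a minimal touchpoint looks like: confine the third-party library to one translation unit behind a small header, so the rest of the code never sees it (the library and all names here are hypothetical):

    /* config_store.h - the only thing the rest of the program sees.
     * The third-party parser ("vendorjson", made up here) is confined to
     * config_store.c, so swapping it later touches exactly one file. */
    #ifndef CONFIG_STORE_H
    #define CONFIG_STORE_H

    typedef struct config_store config_store;  /* opaque handle */

    config_store *config_store_open(const char *path);
    const char   *config_store_get(config_store *cs, const char *key);
    void          config_store_close(config_store *cs);

    #endif

The matching config_store.c would be the only file that #includes the vendor's headers; replacing the dependency later means rewriting that one file against the same interface.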


And vendor your dependencies, archiving tested and blessed snapshots in your version control instead of pulling them live from GitHub, NPM, or BinTray.


> How would you write your code today if you knew it would be your last commit and still in use in 30 years?

Separate the engine from the I/O and user interface. This is also the key to porting it to different systems.

For example, if your engine #includes windows.h or stdio.h, you're doing it wrong.
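Something like this, as a C sketch (names are illustrative, not from any real project): the engine header exposes pure logic plus a caller-supplied output callback, and never pulls in windows.h or stdio.h itself.

    /* engine.h - pure logic, no windows.h, no stdio.h.
     * All output goes through a caller-supplied callback, so the same
     * engine can sit behind a CLI, a GUI, or a test harness. */
    #ifndef ENGINE_H
    #define ENGINE_H

    #include <stddef.h>   /* size_t only */

    typedef struct engine engine;

    /* The host provides this; the engine never prints or reads on its own. */
    typedef void (*engine_emit_fn)(void *userdata, const char *msg, size_t len);

    engine *engine_create(engine_emit_fn emit, void *userdata);
    int     engine_step(engine *e, const char *input, size_t len);
    void    engine_destroy(engine *e);

    #endif

A console front end can implement the callback with fwrite to stdout, a GUI front end can append to a text widget, and a test harness can capture the output in memory; the engine does not know or care which.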


<stdio.h> is part of the C standard library. It's been formally available since 1989 (if C can be used at all), and is one of the headers most likely to be available 30 years from now.


True, but if your host has a GUI, writing to stdout will not work.


How would you write your code today if you knew it would be your last commit and still in use in ... ?

Scribed in gold with a Rosetta code and complete instructions to recreate the computer it runs on. That ought to last a while.


I think virtualization and containers might be part of the answer.

Or write everything in lisp.



