It’s usable and the security benefits are definitely important when working with multiple security domains (separate clients each with their own confidential data and third-party dependencies, where you don’t want one client’s malicious NPM dependency affecting the other).
However, there are cons. It's only really usable in a stationary environment: it completely kills battery life, and even basic tasks such as (non-HD) video playback max out a single CPU core, so it's just not worth trying on a laptop. Hibernation doesn't seem to be supported by default, which becomes risky when combined with the extreme power usage.
I've been using this as a daily driver for at least 5 years now.
Only laptops so far, with 4+ cores and 32+GB RAM and 500G+ disk.
It was working fine on my Lenovo T470p, and it runs pretty sweet on a Lenovo P14s, except that suspend is not working (hopefully that gets resolved soon).
Battery is always a problem, but with suspend working fine it's quite easy to get a solid 30 days of uptime even if you move around. ~3h runtime with ~10 VMs running.
I wouldn't say it's perfect, but I wouldn't choose anything else if I were doing it all over. Totally worth the extreme learning curve ;-)
I may have to try this on my old T420 - it will not be nearly as fast as your setup, but it does have 12GB of RAM and a decent SSD. I do have the dedicated Nvidia graphics card though, which sounds like it may cause problems...
I think it would run fantastically; my Lenovo had dedicated Nvidia graphics and it worked like a charm. The AMD APUs (Vega graphics) have been a horror show.
I used it on a 2013 i7 laptop some years ago successfully as my daily driver for a while by buying a couple extended capacity batteries and swapping them out when they were spent. Could get work done on the beach like that.
Now you can't find that kind of computer with swappable batteries anymore, I have 3D work to do, and I don't trust Intel anymore and try to buy AMD, so I'm stuck with Windows, haha. How times have changed.
I'd buy another box capable of running this if I needed a VM lab type setup again. Very cool for that.
A USB-C battery with 60W output would certainly extend a modern laptop's runtime, even if it can't fully supply the power requirement of a workstation-type laptop.
You can drastically decrease the memory footprint if you use minimal templates: https://www.qubes-os.org/doc/templates/minimal. But even with normal templates, one can run several VMs, and it's much more secure (and even convenient) than an ordinary OS.
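If you want to try it, the dom0 side is only a couple of commands; something like this (the template and qube names here are just examples, pick whatever release you use):

    # dom0: fetch a minimal template (exact package name depends on the release)
    sudo qubes-dom0-update qubes-template-fedora-36-minimal
    # base a small qube on it and cap how much RAM the memory balancer may give it
    qvm-create --template fedora-36-minimal --label green my-small-qube
    qvm-prefs my-small-qube maxmem 400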
sys-net, sys-firewall and the other administrative VMs should slowly migrate to unikernels instead of running Linux, which should help with RAM usage. The mirage.io project already builds a couple of Qubes VMs; for example, https://github.com/mirage/qubes-mirage-firewall is a firewall which they indicate runs in 64 MB of RAM.
edit: maybe I'm being a bit optimistic for sys-net, which is the VM hosting the driver for the network card: those drivers are included in the Linux tree and would need to be extracted and packaged into a unikernel. But for every non-driver VM it "should be easy" to get a unikernel implementation (drivers for paravirtual devices are easy to write).
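From skimming the qubes-mirage-firewall README, the install is roughly the following; treat it as a sketch and follow their docs for the real steps:

    # dom0: put the unikernel where Qubes looks for VM kernels (the image must be named vmlinuz)
    mkdir -p /var/lib/qubes/vm-kernels/mirage-firewall
    # ...copy the built vmlinuz there, then create a tiny PVH qube that boots it
    qvm-create --class StandaloneVM --label green \
        --property kernel=mirage-firewall --property virt_mode=pvh \
        --property memory=64 --property maxmem=64 --property vcpus=1 \
        --property provides_network=True mirage-firewall
    # finally, point your AppVMs' netvm at mirage-firewall instead of sys-firewall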
I use it on my Librem 15 v3. I can confirm it's a PITA as far as performance goes. One "nice" thing is that if a process makes your VM unresponsive, it's just about always contained to that VM. You can kill it and restart it without interrupting anything else.
The other problem is memory. I sometimes have to decide between Signal Desktop and development.
But it's my main machine for the stuff I care about. I love the peace of mind of running random things if the need comes up. For more intensive stuff (media, games, etc.) I have a previous machine running Ubuntu.
> if a process makes your VM unresponsive, it's just about always contained to that VM.
I still wonder why kernels can't just handle this properly.
I've seen it in both Windows and Linux systems, something takes all the CPU or I/O or RAM, and the UI is so starved that you can't kill it.
Shouldn't that already be handled by things like virtual memory, and the kernel scheduler? Why do modern OSes, that have to protect against such complicated attacks like Spectre, struggle to do what the very first multi-tasking kernels promised before I was born?
Windows has a single official UI and low-level handling of a key press to bring up an element that must ideally remain responsive.
Linux has many UIs, and a set of low-level responses to keybindings that must remain responsive but that have neither a UI nor the ability to kill individual applications.
To be fair, on current systems Linux UIs normally remain responsive until memory exhaustion, which is handled very badly. You will probably reboot before the OOM killer assassinates the offender. The fix is a user-space daemon like earlyoom, which activates at a configurable threshold and with configurable targeting, rather than at absolute exhaustion, by which point even killing the offender is challenging and the system has already been unusable for tens of minutes.
You or your environment can also set your UI process to a higher priority, both for CPU and for I/O.
If your UI doesn't remain responsive it's because your distro isn't using existing tools to achieve this.
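On a Debian-ish system that's roughly this much work (the threshold is just an example):

    # install earlyoom so the biggest offender gets killed before the whole system starves
    sudo apt install earlyoom
    # start killing when available RAM drops below ~5% (tune to taste)
    echo 'EARLYOOM_ARGS="-m 5"' | sudo tee /etc/default/earlyoom
    sudo systemctl enable --now earlyoom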
If you are at the physical keyboard you can run the OOM killer via the SysRq key. I would say that the kernel OOM killer has been quite usable for the last few years, but it is triggered too late - especially for applications like Firefox that handle malloc failures gracefully.
The OOM killer wouldn't help if there's an I/O overload (good old friend, Linux kernel bug #12309), since memory usage is fine; it's just that your system can't read anything from network or storage.
>The magic SysRq key is a key combination understood by the Linux kernel, which allows the user to perform various low-level commands regardless of the system's state. It is often used to recover from freezes, or to reboot a computer without corrupting the filesystem.[1] Its effect is similar to the computer's hardware reset button (or power switch) but with many more options and much more control.
Not really. The UI will still be unresponsive and the only thing you can do is reboot the system safely using Magic SysRq. What OP is asking about is why the UI is unresponsive in the first place.
When I ran memory-intensive tests on low RAM, I had the SysRq sequence for the OOM killer at my fingertips. When the system started thrashing, it was a quick way back to safety.
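For anyone who hasn't used it: the sequence is Alt+SysRq+F (invoke the OOM killer), and it only works if SysRq is enabled:

    # check whether magic SysRq is enabled (1 = all functions allowed)
    cat /proc/sys/kernel/sysrq
    # enable it for the current boot
    echo 1 | sudo tee /proc/sys/kernel/sysrq
    # the keyboardless equivalent of Alt+SysRq+F: ask the kernel to run the OOM killer now
    echo f | sudo tee /proc/sysrq-trigger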
It's levels of abstraction. The kernel does not know that swapping out X11/window-manager/gnome-terminal/whatever will make your system unusable.
There is mlockall(), but it's a loaded footgun, and it's rlimit-ed by default because it's not something you want users on a multiuser system to be able to do willy-nilly.
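If you want to see the limit in question, it's the memlock rlimit (the limits.conf line is just an example):

    # per-process cap on locked memory, in KiB; small by default for unprivileged users
    ulimit -l
    # an admin can raise it per user in /etc/security/limits.conf, e.g.
    #   someuser  hard  memlock  unlimited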
The task manager in Windows has a higher priority than everything else, if I recall correctly. It's not pure RAM or CPU usage that causes you to be unable to reach it, but waiting on blocking syscalls, I'd suspect.
Microkernel vs monolith has very little relation to resource (time and space) allocation, other than the fact that a microkernel is going to have an easier time with preventing runaway space usage of kernel services.
I've been using Qubes for over five years now as my work/coding machine. A couple notes if anyone is curious.
As for performance, it is pretty good for what it is. My laptop is too old to do much of anything serious anyway, a 2011 x220, but with 16GB RAM I don't really have issues with day-to-day use. I can't play full-screen YouTube, which works in other Linux OSes, so there is something about that pipeline that isn't as performant.
My usual use is running emacs and SBCL, no issues there. I give my work VM around 8gb of RAM and that is more than enough.
One nice thing I have done repeatedly: I write a server in my work VM, then clone the VM with all my code in /home, then reduce the RAM on the clone to just enough to run the code as a server, and test it. Once the code is in good shape, I spin up a fresh VM from the same template but without any of the /home files, copy the right folder over, and run it for a while as a longer-term sort of test.
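In case it's useful to anyone, the dom0 side of that dance is roughly the following (the qube names are made up):

    # clone the dev qube, /home and all, and shrink the clone's RAM
    qvm-clone work work-server
    qvm-prefs work-server maxmem 1024
    # later: a fresh qube from the same template, with an empty /home
    qvm-create --template work-template --label red server-clean
    # then, from inside the work qube, copy only the project folder across
    qvm-copy-to-vm server-clean ~/project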
As someone who runs VMs randomly for various reasons, it is nice to have them integrated into the machine. Before this I was using a Linux box with KVM and working from within a VM anyway. Having all the VMs update with only a couple of clicks is very convenient, as is making backups of them. Having them all display windows within one desktop environment is fantastic for such a small screen.
Only annoyance is having a couple extra clicks to pull the clipboard from one VM to another but I got used to it and it doesn't bother me anymore.
> Only annoyance is having a couple extra clicks to pull the clipboard from one VM to another but I got used to it and it doesn't bother me anymore.
When you say clicks here, I assume you do the copy dance? (ctrl+insert, ctrl+shift+c, ctrl+shift+v, shift+insert) It sticks... nowadays I tend to do that even in Windows.
The issue is likely that in Qubes OS you don't have hardware acceleration. The bottleneck might very well be single thread performance, though your tips are worth a try.
For me it depends on the specifics of the video, usually ~10 Mbit/s 1080p x264 and YouTube at 1080p is mostly ok, but I don't even need to try 2160p content. (i5-9500T)
Overall an impressive and surprisingly well functioning software, given what it does.
Unfortunately - and I haven't written this in the post yet, I should - I felt I had to switch back to a normal distro again because of a combination of a few annoyances:
- The need to manually redirect the USB-keyboard to each qube
- The fact that I couldn't get Debian-based Qubes to respect the font size settings
- The overall relative slowness of the system, causing some already slow software to be even slower
- The last straw: the inability to run VirtualBox VMs inside Qubes (which I use to create some course content, etc.)
> The need to manually redirect the USB-keyboard to each qube
Why did you have to do this? In order to type in a qube? My USB keyboard just stays connected to the USB qube (sys-usb) and everything seems to work. I have never changed the qube my USB keyboard is connected to.
If the signals are going into sys-usb you must be piping them somewhere. For me and probably the GP, there's a "systray" (do they still call it that?) menu for connecting devices to things, and I have to direct my keyboard to a chosen VM that way (I have the added annoyance of having to re-set Dvorak each time; there's a bug somewhere). Maybe for you it's piping straight to dom0?
I recall at the beginning there was some security question you're asked related to this. Maybe you answered differently than us.
Exactly, the "systray" thing that is needed to switch the keyboard betweeen qubes. Dom0 would not even be accessible from USB-keyboards, for USB-related security reasons.
I use the latest version of Qubes and sys-usb, and I don't ever reassign my USB keyboard (I do for other USB devices). It sounds like the person in the top comment just configured their system in an annoying way.
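If I remember the docs correctly, whether sys-usb forwards keyboard input automatically comes down to a one-line RPC policy in dom0 (this is the 4.0-era policy path):

    # dom0: /etc/qubes-rpc/policy/qubes.InputKeyboard
    sys-usb dom0 allow,user=root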
Some time ago I suggested integrating [1][2] ReactOS with Qubes OS to run Windows programs in secure domains - it's problematic to do that properly with the real Windows.
Ah, I see. It's because they based their OS on Windows 7 (and earlier), so newer programs may use newer APIs that ReactOS doesn't have, and it isn't forward compatible. That sucks, but given that information, just installing Windows is the only proper solution.
Would you like to share more about your experience? How resource-intensive is it, what you do, what would a normal workflow look like with it compared to, say, a Fedora desktop?
If your work requires GPU acceleration, then you're out of luck [0]. Normal apps like a browser, LibreOffice work fine. Hardware virtualization is extremely secure [1] and also fast. Everything which works on Linux should work on Qubes, including drivers (which are usually isolated in their own VMs). You can also run several different Linux flavors simultaneously [2] and Windows [3], too. The downside is that you really need a lot of RAM for that (unless you try to minimize your VMs [4], which is an advanced feature). I have 32 GB, and I never run out of RAM.
I love that you can explicitly compartmentalize your digital life into independent VMs with a great unified interface. I have a "work" VM which contains all work stuff and a "personal" VM for personal things. More thoughts from me: https://forum.qubes-os.org/t/how-to-pitch-qubes-os/4499/15.
> Without GPU acceleration, a browser won't work fine
Disagreed - it does work fine. Maybe there are specific niche features that don't work, but I have yet to come across them. The only problem is video decoding, which works but uses lots of CPU, so it's terrible for laptop battery life.
No, that is not what is meant by GPU virtualization. GPU virtualization is the GPU equivalent of Intel's VT and AMD's AMD-V, it's virtualization supported by the hardware which allows the GPU to be used by multiple VMs as though they were exclusive.
For some reason the GPU manufacturers have seen fit to limit this to expensive professional grade GPUs, but hopefully we'll see it in consumer hardware soon. Well, except I think Intel already has it in consumer chips, it's just that their GPUs are kinda shit so no one really cares.
If you mean SR-IOV, that basically splits one PCI device into several virtual functions, which you then pass through to the guests, which means in theory it should be supported. At least that's how network cards are treated; sadly I have not yet been able to utilize it on GPUs :)
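On the NIC side it really is that simple; something like this (eth0 is just an example device) creates the virtual functions, which then show up as separate PCI devices you can pass through:

    # how many virtual functions this NIC supports
    cat /sys/class/net/eth0/device/sriov_totalvfs
    # create 4 of them
    echo 4 | sudo tee /sys/class/net/eth0/device/sriov_numvfs
    # the VFs appear as their own PCI functions
    lspci | grep -i "virtual function"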
I like the idea of unifying configuration like that. Qubes does have templates, but there are per-VM changes I want to make on top of them, which adds a little more configuration work. NBD, but not that smooth. Nix seems like a good fit for this problem; it's crossed my mind before. One could install everything at the "template" level safely (in theory?) and only run things in the VMs. I've never used Nix and I'd like to give it a try at some point.
HOWEVER, I like that Qubes has Debian. I'm used to it, it's predictable, etc. Some of my work basically even depends on it. With Spectrum I'd be stuck with the quirks of Nix, which I understand are less friendly than Debian's. If I had the option of falling back to the equivalent of a Debian template, I'd likely switch.
Yeah, I remember I tried looking into this and gave up on it. IIRC I couldn't understand exactly how it worked, and/or it seemed like a large time investment. Not to mention: how do I manage it in a repository if I can't safely move anything in and out of dom0?
Qubes I think could benefit from more resources for ramp-up documentation.
I once had a bunch of reasons to believe that my main linux machine had been compromised by a (totalitarian) state actor for months. As soon as I googled "qubes", all input devices froze. I went and bought a new machine. Dissent is costly. :-)
P.S. Saw that same "frozen" behavior a few years earlier when trying to wipe drives from a live OS.
Happily using it for a few years now, though on a beefy desktop machine. Stability issues are limited to a Firefox issue in the Fedora domains and some hiccups with dual monitors on startup.
In short: we believe the Xen architecture allows for the creation of more secure systems (i.e. with a much smaller TCB, which translates to a smaller attack surface). We discuss this in much greater depth in our Architecture Specification document:
One of the major issues that I ran into is that the proprietary nvidia driver for desktop graphics cards and a xen dom0 hypervisor do NOT coexist well on the same machine.
Not really Qubes-specific; you can see the same thing if you take an ordinary debian-stable desktop Linux system with an Nvidia graphics card, install the Xen packages, and then attempt to boot into a Xen-enabled kernel.
Honestly, it's not a "reasonably secure OS" but an "absurd OS": absurd because, for safety, we have had for a few decades a very lightweight and very effective solution: Plan 9 namespaces.
OSes that are actually older than Plan 9 but still alive have made limited and limiting choices, but many have something "somewhat equivalent", for instance GNU/Linux cgroups (see Firejail, Bubblewrap, etc.) or FreeBSD Capsicum. Choosing anything heavyweight is nonsense.
Linux has definitely caught up on all of the features the wider industry deemed essential or nice to have. There may be some things missing but that is due to little demand outside of perhaps hobbyists.
> Linux has definitely caught up on all of the features the wider industry deemed essential or nice to have.
That's the issue: past IT was made human-centric, with the desktop as the center of the digital world and the human as someone who bends his/her desktop to his/her needs and desires, with a network to communicate with other humans. Modern IT is "big player centric" and evolves only for their needs and desires, which happen to be far from those of the rest of humanity.
Do you remember the big push toward full-stack virtualization (on x86) not so many years ago? Who really needed it? In most cases those solutions are just ways to sell hardware. That push turned out to be unsustainable on x86, and so the container era was born; again, who needs it? Oh, a cloud provider that sells VPSes, yes: it needs both full-stack virtualization and various paravirtualization solutions; the rest of the world gets no benefit from running k8s at home, often on a single physical machine. Snap/Flatpak/AppImage? Same story: they serve the purpose of giving distro and community independence to commercial players, but who needs them?
All "modern" IT is prehistoric* compared to the original Xerox/Symbolics and even AT&T IT, but it is sold as new, not to improve our lives but against our interests, giving us just some crumbs and lock-in for the sake of a few big players. Those in the FLOSS world who follow the trend are actually working for free against their own interests.
The demand is "little" only because ignorance is high. And that's a classic in every society: people who know, people with culture, are always a minority, but that does not mean their "desires" are minor; they simply know their interests, which others do not, even though those others would benefit equally. And that's why FLOSS should be mandatory and universities MUST be public and well funded, to drive research ahead of the private sector, which can only pick some research to implement and sell the outcome, not drive society down a devastating path.
Hmm, I'm not sure I disagree with your premise, but I do disagree with the bit about virtualization. It was pushed by corporations (and things like Docker) because it provides a quality of life I never had before. I'm talking just about development: I don't keep a local Linux server anymore (I was a daily user for 15 years) because it's all just a docker compose away.
I think it definitely had its place before containerization, but that is when it took off everywhere. It wasn't a single push; it was a years-in-the-making process.
That's because you know "the new classic" and "the containers", not something else. If you knew NixOS or Guix System, you would know that for far less you can get far more.
A small example: IoT is now a must for modern houses with PV etc., and Home Assistant is the best-known FLOSS solution. They suggest deploying it via a Docker image, so you need a few GB of disk just for it, the corresponding RAM, etc. If you deploy it via pip it's just 321 MB, nothing else. Actually, many system package managers support pip integration.
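For comparison, the pip route is just this (the venv path is my choice):

    # Home Assistant Core in a venv instead of the multi-GB container image
    python3 -m venv ~/hass
    ~/hass/bin/pip install homeassistant
    ~/hass/bin/hass    # web UI comes up on port 8123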
On top of that, my entire home infra configuration is one org-mode file, easy to read, share, and move, just ~2000 lines; with a k8s infra I'd probably need around 4-5 times as many lines of crappy YAML, with constant babysitting to keep everything up to date. My actual infra is just two desktops, a laptop, a home server, and a few IoT devices (the PV system, a VMC controlled via a ModBus interface from the home server, a few other goodies). The crappy YAML config for HA alone is four times* the size of the entire NixOS infra config.
In a single click (on an org-mode link to tangle the config, and on org-babel to run terminator and a script inside it), from one desktop or server I can generate a new custom NixOS ISO for any other system, copy it to a TFTP share for network boot or onto a Ventoy-managed USB stick/SSD, and manually boot the target machine. That machine will become an exact functional copy of the original. With Docker and a classic distro? Well, RH has Kickstart, which is a bit less nightmarish than preseed but also limited, and building a custom ISO is a long process anyway; for Debian-based systems it is even longer. I do not know about Arch and co. A custom NixOS ISO is just a single file (or more, if I want them split) passed to nix-build '<nixpkgs/nixos>' -A config.system.build.isoImage -I nixos-config=isoconfig.nix; for Guix it is just slightly longer.
With Plan 9, well, I would not even need an ISO since the infra is also the network... Countless services and their network protocols are meaningless on a live Plan 9 system; for instance, sending email does not necessarily require SMTP: the sender's MUA just mounts the recipient's network share and saves a file there. It's all built into the system. Reading a website? The same: mount the exposed filesystem and open the documents with your favorite viewer. Want something on another machine? Just open its graphical display remotely and there you go; no need for anydesk/bomgar/teamviewer/citrix/guacamole/*
And I can add more: no need for big cloud infra. These days we have enough bandwidth and computing power to have essentially all the real redundancy we need at home, scaling with the scale of the owner.
I think they mean the networking (everything is a file on the network). This wasn't adopted, but FUSE etc. have brought that functionality to Linux. If you really want to model Plan 9 on Linux, there's an app for that which runs atop Linux.
Heaven knows what someone conflating cgroups and namespaces means in connexion with Qubes. Anyway, if you want to know what I mean, read the paper "Security in Plan 9". "Linux" is irrelevant, and the various Plan 9 stuff-on-Unix efforts surely aren't going to improve the security of the OS.
You are the one who mentioned security, not the other user. My point was that I don't think they were referring to security, as Plan 9's most famous features very much have made their way into every major OS out there.
I was going by the top of the thread and choosing to assume the rest wasn't just a non sequitur. I don't actually see all resources in GNU/Linux (for instance) available for me to mount remotely into my namespace via a uniform protocol.
Well, Unix is another really bad OS compared to its historical predecessors. At first they settled on a bad programming language in order to need less hardware horsepower, and separated that cheap language from the user's language (C for the system, for "complex" things; shell scripts for the end user). For similar reasons they decided there was no need for GUIs, even though long before Unix we had GUIs, touch monitors, even the world's first video conference with screen sharing over a LAN (the so-called Mother of All Demos, in 1968 [1]). Then they realized that was not so good, and graphics systems started to appear on Unix: far more limited, more complex, and in complete violation of Unix principles, since for GUIs there were no IPCs. Classic PostScript GUIs do support some user programming, but nothing like the classic systems, and CDE supports a certain amount of integration, but again nothing like the classic systems.
Since then, all "modern" systems keep rediscovering, in limited and bug-ridden ways, what the historical systems did far better decades before...
I think many people should just watch a classic advertisement like https://youtu.be/M0zgj2p7Ww4, then look at its date and at where we are today...
It's not only security, it's the overall design. In the past, hardware resources were limited, so hacks and slowness were common; the hardware itself, being "in a pioneering phase", was full of hacks and ugliness, but evolving those systems would have taken us to the moon, while instead we are still in the middle ages...
Can anyone compare Qubes and Genode from a security or usability point of view? Genode now advertises hardware acceleration (on Intel and Vivante GPUs only) in contrast to Qubes, per discussion here.
I have encountered people who use Qubes as a daily driver, while the same is not true of Genode. That might say more about my circle of friends or about popularity than about usability, but I suspect it's indicative...
That is on purpose: The mission statement is supposed to be tongue-in-cheek as a nod to the complexity and difficult tradeoffs in anything security related.