Coincidentally, I just used a veritable litany of bind mounts this week. I did not attempt to hide anything, though one of those bind mounts exists in order to fake something. I needed this to be able to properly run some x86 software under Rosetta 2 in an arm64 Ubuntu VM.
I wasn't happy with how multi-architecture package handling works in Ubuntu/Debian: you basically have to choose, per package, which architecture to install it for, because installing the same package for both architectures almost always conflicts (both trying to occupy the same files).
So instead, I made an x86 chroot. I used "debootstrap" to populate it with an x86 Ubuntu base system, and schroot so that a regular non-root user can enter it. That way, I can install any x86 package I want without conflicting with the surrounding VM, just by entering the chroot and running e.g. "apt install emacs" as usual.
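For concreteness, a minimal sketch of that setup. The release name, mirror URL, and schroot entry name are assumptions for illustration, not necessarily what I used:

```shell
# Populate the chroot with an amd64 Ubuntu base system (run as root).
# "jammy" and the mirror URL are placeholders.
debootstrap --arch=amd64 jammy /data/x86 http://archive.ubuntu.com/ubuntu

# Minimal schroot entry, e.g. in /etc/schroot/chroot.d/x86.conf:
#   [x86]
#   type=directory
#   directory=/data/x86
#   users=youruser
#   profile=minimal
# (profile=minimal keeps schroot from doing its own /proc etc. mounts,
#  since those are handled manually below.)
```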
But both because a lot of software needs it, and for interop with the surrounding VM (unix domain sockets etc.), I had to create a bunch of bind mounts to bring shared "state" directories into the x86 chroot:
mount --make-private --bind /proc /data/x86/proc
mount --make-private --bind /sys /data/x86/sys
...
I did this for at least /proc, /sys, /run, /dev, /dev/pts and /home. Depending on how much you want to bring the environments together, you could also add /tmp and others.
I also added bind mounts for individual files:
mount --make-private --bind /etc/passwd /data/x86/etc/passwd
mount --make-private --bind /etc/shadow /data/x86/etc/shadow
mount --make-private --bind /etc/group /data/x86/etc/group
so that my x86 environment has the same user/group database.
Finally, and I think this is the most interesting piece, some x86 software I tried to run did not work because it tried to parse /proc/cpuinfo for some x86 CPU features. Rosetta 2 implements those, but of course /proc/cpuinfo in the system describes the actual (virtualized) ARM CPUs, which is of course not what x86 software expects.
So, I crafted my own fake cpuinfo text file that looked roughly like it would look in a real x86 environment (I did not put much effort in it, just copied an output from a random real x86 machine and adjusted the number of CPUs), and bind mounted that into the x86 /proc overlay:
mount --make-private --bind /data/fake-x86-cpuinfo.txt /data/x86/proc/cpuinfo
Now, /proc/cpuinfo inside the chroot (so, /data/x86/proc/cpuinfo) gives the fake cpuinfo and makes the x86 software that parses it happy. This is also where the --make-private that I applied to /data/x86/proc earlier becomes important: without it, mount events propagate between the bind mount and the original /proc, so that last command would overlay not just /data/x86/proc/cpuinfo but also /proc/cpuinfo itself, in turn potentially making arm64 software unhappy. With --make-private, the /proc bind mount stops sharing mount events with the original /proc, so the cpuinfo overlay stays confined to the chroot.
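A quick way to check the confinement, assuming the mounts above are in place (run as root):

```shell
# Inside the chroot: should show the fake x86 cpuinfo.
chroot /data/x86 head -n 5 /proc/cpuinfo

# On the host: should still show the real (virtualized) ARM cpuinfo,
# confirming the overlay did not propagate back thanks to --make-private.
head -n 5 /proc/cpuinfo
```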
Finally, the biggest hurdle was getting systemd to perform all of these mounts in the right order at startup. systemd parallelizes the mounts in fstab (and mounts generally). But if e.g. the /data/x86/proc bind mount happens before both /proc and /data/x86 have been mounted, you just end up binding an empty directory (and similarly for every other wrong ordering). This was further complicated because I use ZFS, so /data/x86 gets mounted by zfs-mount.service rather than from fstab. After much fiddling, I gave up: no combination of "x-systemd.requires=<mountpoint>", "x-systemd.after=zfs-mount.service" and whatever else in the fstab options fully did the right thing.
I resorted to having a shell script that just runs the bind mounts in the correct order.
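The script is essentially the mounts from above in dependency order; a minimal sketch (paths match the chroot at /data/x86):

```shell
#!/bin/sh
# Bind-mount the shared state directories into the x86 chroot, in an
# order where every mount point already exists and is populated.
set -e

# /dev must come before /dev/pts, since --bind is not recursive.
for d in proc sys run dev dev/pts home; do
    mount --make-private --bind "/$d" "/data/x86/$d"
done

# Share the user/group database with the chroot.
for f in passwd shadow group; do
    mount --make-private --bind "/etc/$f" "/data/x86/etc/$f"
done

# The fake cpuinfo must come last, after /proc is bound into the chroot.
mount --make-private --bind /data/fake-x86-cpuinfo.txt /data/x86/proc/cpuinfo
```

One way to run it at boot would be a oneshot systemd service ordered After=zfs-mount.service, though that is my assumption; the point is simply that a script serializes the mounts where fstab ordering did not.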