What is the overhead on a FUSE filesystem compared to being implemented in the kernel? Could something like eBPF be used to make a faster FUSE-like filesystem driver?
> What is the overhead on a FUSE filesystem compared to being implemented in the kernel?
The overhead is quite high, because of the additional context switching and copying of data between user and kernel space.
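A rough way to see that round-trip cost is to time an operation that FUSE forwards to the user-space daemon on every call (open()/close() is a convenient one) against the same operation on a kernel filesystem. This is a minimal sketch, not a rigorous benchmark, and the two mount paths are placeholders you'd point at your own setup:

```c
/* Rough sketch: time repeated open()/close() calls on a file behind a
 * FUSE mount vs. an equivalent file on a kernel filesystem.  open() is
 * forwarded to the FUSE daemon on each call, so the loop exposes the
 * per-request user<->kernel round trip.  Paths are placeholders. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

static double bench_ns(const char *path, int iterations)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < iterations; i++) {
        int fd = open(path, O_RDONLY);
        if (fd < 0) {
            perror(path);
            exit(EXIT_FAILURE);
        }
        close(fd);
    }
    clock_gettime(CLOCK_MONOTONIC, &end);

    return (end.tv_sec - start.tv_sec) * 1e9
         + (end.tv_nsec - start.tv_nsec);
}

int main(void)
{
    const int n = 100000;

    /* Placeholder paths: one file on a FUSE mount, one on a native
     * (in-kernel) filesystem. */
    double fuse_ns   = bench_ns("/mnt/fuse/testfile", n);
    double native_ns = bench_ns("/mnt/native/testfile", n);

    printf("FUSE:   %8.0f ns per open/close\n", fuse_ns / n);
    printf("native: %8.0f ns per open/close\n", native_ns / n);
    return 0;
}
```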
> Could something like eBPF be used to make a faster FUSE-like filesystem driver?
eBPF can't really change any of the problems I noted above. To improve performance, one would need to change how the interface between the kernel and user-space parts of a FUSE filesystem works to make it more efficient.
That said, FUSE support for io_uring, which was merged recently in Linux 6.14, has potential there; see:
There is considerable overhead from the user space <> kernel <> user space switches; you can see something similar with WireGuard if you compare the performance of its Go implementation vs. the kernel driver.
Some FUSE drivers can avoid the overhead by letting the kernel know that the backing resource of a FUSE filesystem can be handled by the kernel directly (e.g. for a FUSE-based overlay FS where the backing storage is XFS or something), but that probably isn't applicable here.
If you're in kernel space, though, I don't think you'd have access to OpenCL so easily; you'd need to reimplement it based on kernel primitives.
> What is the overhead on a FUSE filesystem compared to being implemented in the kernel?
It depends on your use case.
If you serve most of your requests from kernel caches, then FUSE barely adds any overhead. That was the case for me when I had a FUSE service that served every commit from every branch (across all of history) simultaneously as directories, directly from the data in a .git folder.
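A minimal sketch of the "serve from kernel caches" idea using libfuse 3's high-level API: a read-only filesystem that turns on kernel_cache and long entry/attribute timeouts, so after the first access the kernel's page, dentry, and attribute caches answer repeated requests without calling back into the daemon. The single /hello file and its contents are made up for illustration and aren't the git-serving filesystem described above:

```c
/* Minimal read-only FUSE filesystem (libfuse 3, high-level API) that
 * leans on kernel-side caching: page cache contents are kept across
 * opens, and lookups/attributes are cached for an hour. */
#define FUSE_USE_VERSION 31
#include <fuse.h>
#include <errno.h>
#include <string.h>
#include <sys/stat.h>

static const char *hello_path = "/hello";
static const char *hello_data = "cached by the kernel\n";

static void *hello_init(struct fuse_conn_info *conn, struct fuse_config *cfg)
{
    (void) conn;
    cfg->kernel_cache  = 1;    /* keep page-cache data across opens   */
    cfg->entry_timeout = 3600; /* cache name lookups in the kernel    */
    cfg->attr_timeout  = 3600; /* cache attributes in the kernel      */
    return NULL;
}

static int hello_getattr(const char *path, struct stat *st,
                         struct fuse_file_info *fi)
{
    (void) fi;
    memset(st, 0, sizeof(*st));
    if (strcmp(path, "/") == 0) {
        st->st_mode = S_IFDIR | 0755;
        st->st_nlink = 2;
        return 0;
    }
    if (strcmp(path, hello_path) == 0) {
        st->st_mode = S_IFREG | 0444;
        st->st_nlink = 1;
        st->st_size = (off_t) strlen(hello_data);
        return 0;
    }
    return -ENOENT;
}

static int hello_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                         off_t off, struct fuse_file_info *fi,
                         enum fuse_readdir_flags flags)
{
    (void) off; (void) fi; (void) flags;
    if (strcmp(path, "/") != 0)
        return -ENOENT;
    filler(buf, ".", NULL, 0, 0);
    filler(buf, "..", NULL, 0, 0);
    filler(buf, hello_path + 1, NULL, 0, 0);
    return 0;
}

static int hello_read(const char *path, char *buf, size_t size, off_t off,
                      struct fuse_file_info *fi)
{
    (void) fi;
    if (strcmp(path, hello_path) != 0)
        return -ENOENT;
    size_t len = strlen(hello_data);
    if ((size_t) off >= len)
        return 0;
    if (off + size > len)
        size = len - off;
    memcpy(buf, hello_data + off, size);
    return (int) size;
}

static const struct fuse_operations hello_ops = {
    .init    = hello_init,
    .getattr = hello_getattr,
    .readdir = hello_readdir,
    .read    = hello_read,
};

int main(int argc, char *argv[])
{
    return fuse_main(argc, argv, &hello_ops, NULL);
}
```

Build with `gcc hello.c -o hello $(pkg-config fuse3 --cflags --libs)` and mount it in the foreground with `./hello -f /some/mountpoint`. Read the file twice: the second read should be served from the kernel page cache without the daemon ever seeing it, which is why a cache-friendly workload hides most of the FUSE overhead.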