Indeed, you can check socket credentials. You can set up nftables filtering rules that match on UID. You can stash a cookie somewhere else to exchange and authenticate the connection à la xauth. You could use TLS and check the host key against a public key stored at install time. There are many ways to do this, none of which requires more than a few dozen lines of code/config.
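The xauth-style cookie idea, for instance, fits in a few lines. A minimal sketch in Python, with the file path and cookie size chosen arbitrarily; a real setup would place the cookie somewhere only the intended client's uid can read it:

```python
import hmac, os, secrets, tempfile

# Server side: mint a random cookie and store it in a file readable
# only by the owner (mode 0600), standing in for a protected location.
cookie = secrets.token_hex(16)
fd, path = tempfile.mkstemp()
os.write(fd, cookie.encode())
os.close(fd)
os.chmod(path, 0o600)

# Client side: read the cookie and present it when connecting.
with open(path) as f:
    presented = f.read()

# Server side: compare in constant time before trusting the peer.
print(hmac.compare_digest(cookie, presented))
os.remove(path)
```

Anything that can't read the cookie file can't authenticate, which is exactly how X11's MIT-MAGIC-COOKIE scheme gates access over TCP.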
But really the simplest thing would just be to use a port <1024 so that only root can open it. That's literally what the feature was for. You can still be "attacked", but only by someone who already has local root.
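You can observe the privileged-port behavior directly. A small Python sketch; note the root-only assumption is a simplification on Linux, where CAP_NET_BIND_SERVICE or a lowered net.ipv4.ip_unprivileged_port_start sysctl also permits low binds:

```python
import socket

def can_bind(port: int) -> bool:
    """Try to bind a TCP socket to `port` on loopback; report success."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("127.0.0.1", port))
        return True
    except PermissionError:
        # EACCES: the port is privileged and we lack the rights.
        return False
    finally:
        s.close()

# An ephemeral port (0 = "pick one for me") always works unprivileged;
# can_bind(80) would typically return False for a non-root process.
print(can_bind(0))
```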
None of that (save for running as root, which is very crude, much less granular, and requires elevating the process in question to root) is "about the same amount of work" as using a Unix socket directly.
If the daemon isn't running as root, it can't put the socket in a secure location, which requires more code. That code isn't complicated, but neither are any of the suggestions above.
Once more: people wanting to make this security bug about the specific socket family in use are doing bad security analysis. There's nothing wrong with TCP, the app just did it wrong and failed to recognize the security boundary being crossed.
This is all well and good if you want to restrict access to root users, but I thought we were trying to restrict access "to a specific process" (i.e. a specific client application.)
Open the socket and drop privilege before launching the daemon. I mean, come on: inetd could do this back in 4.3BSD on a VAX.
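That pattern is easy to sketch. A minimal Python version, assuming uid/gid 65534 ("nobody" on many systems) as the unprivileged identity; the demo binds an ephemeral port, where a real daemon would grab one below 1024 while still root:

```python
import os, socket

def open_then_drop(port: int, uid: int, gid: int) -> socket.socket:
    """inetd-style startup: bind while privileged, then drop to an
    unprivileged uid/gid before touching any client data."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen()
    if os.getuid() == 0:   # only root can actually drop; skip otherwise
        os.setgroups([])
        os.setgid(gid)     # drop group first: setuid() revokes the right
        os.setuid(uid)
    return srv

# Demo on an ephemeral port (port 0 = kernel-assigned):
srv = open_then_drop(0, uid=65534, gid=65534)
print(srv.getsockname()[1] > 0)
srv.close()
```

The ordering matters: the socket survives the setuid() because it was opened before the drop, which is the whole trick.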
I remain absolutely dumbfounded that people in this subthread are going to the mattresses trying to explain why Unix sockets are great and TCP isn't, when they both suck in exactly the same way and the correct answer is "validate your input", not "use a different API".
I'm not trying to explain why Unix sockets are great and TCP isn't... I'm trying to solve a real-world problem in a similar vein myself. FWIW, I agree that you should use Unix sockets for local-machine access - you can't accidentally expose them off the box like you can a TCP socket. But that's neither here nor there.
You seem to be misunderstanding the scenario I'm describing: I have a daemon that runs in a privileged context (as root.) I have a client that connects to the daemon, as any user on the box. The client cannot be run as root because the user does not have permission to do so.
I want to ensure that only my client can connect to the daemon. I can't use user/group permissions, because I don't care what user/group has access. I want to make sure a specific process (or a specific binary/executable) has access. To quote the comment I initially responded to:
> it's equally true that you could lock down a TCP socket to a specific process with about the same amount of work.
On a Unix machine, this is often done by creating a group to use for access (e.g. a docker group.) This works to lock down a TCP socket to a specific group but not to a specific process. Using shared secrets stored elsewhere on the box also doesn't help here, since any other process could access those secrets.
The best I know of is using something like XPC on macOS, using SO_PEERCRED and checksumming the binary behind /proc/<pid>/exe on Linux, or perhaps using some other platform-specific code-signing API.
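A rough Linux-only sketch of the SO_PEERCRED-plus-checksum idea, with the TOCTOU caveat noted in the comment (pids can be recycled and binaries replaced between check and use, so this is best-effort, not a hard boundary):

```python
import hashlib, os, socket, struct

def peer_exe_sha256(conn: socket.socket) -> tuple[int, str]:
    """Fetch the peer's pid via SO_PEERCRED (Linux-only), then hash
    the executable behind /proc/<pid>/exe. Racy: the pid can be
    recycled and the binary swapped after this check."""
    creds = conn.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                            struct.calcsize("3i"))
    pid, uid, gid = struct.unpack("3i", creds)  # struct ucred layout
    with open(f"/proc/{pid}/exe", "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return pid, digest

# Demo: a socketpair's peer is this very process, so the reported pid
# and executable hash should match our own interpreter.
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
pid, digest = peer_exe_sha256(a)
with open("/proc/self/exe", "rb") as f:
    own = hashlib.sha256(f.read()).hexdigest()
print(pid == os.getpid() and digest == own)
a.close(); b.close()
```

A real daemon would compare the digest against an allowlist recorded at install time; scripts are awkward here because /proc/<pid>/exe points at the interpreter, not the script.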
I was excited to hear that it was easy. I'm disappointed now.