
"But because of a subtlety of Unix systems (of which macOS is one), when an existing file is moved from another location to the root directory, it retains the same read-write permissions it previously had."

And ownership. (The sentence before that suggests "root directory" here means "directory owned by root".) The permissions on the directory apply only to the file's directory entry, not the file itself.

This isn't a "subtlety", it's how it works, and no-one who doesn't understand this should be writing macOS installers.

The correct approach is to open the file for read and copy its bytes into your private area. Only then check the cryptographic signature. Just renaming into your private directory isn't enough, even if that directory has no read or execute access to other users, because the attacker could have opened the file for write access before you did the rename.
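
A minimal sketch of that flow using stock macOS tools (run as root; the paths are hypothetical, and a real installer would do this with syscalls rather than by shelling out):

  # copy the bytes into a root-owned, mode-700 staging directory; install(1) writes
  # a fresh inode, so a descriptor the attacker already holds on the original is useless
  $ install -m 600 -o root -g wheel /tmp/update.pkg /var/private/update.pkg
  # verify the copy, never the attacker-reachable original
  $ pkgutil --check-signature /var/private/update.pkg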



Yup, this is a very common misconception. In Unix, directories are not folders. Directories are literally a directory, like a phone book. It lists files and their location on disk. A directory doesn't contain any other files. Moving a file from one directory to another only modifies the directories, not the file itself.

https://unix.stackexchange.com/questions/684122/what-permiss...
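
You can see it directly: within one filesystem, mv only rewrites directory entries, so the inode number (and everything hanging off the inode, including its permission bits) stays put. A quick sketch, with a made-up inode number:

  $ mkdir box && echo hi > note.txt
  $ ls -i note.txt
  1234567 note.txt
  $ mv note.txt box/ && ls -i box/note.txt
  1234567 box/note.txt    # same inode: only the directory entries changed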


So what? Windows likes the word folders but it works the same way. And you could easily have file access permissions depend on the directory if you wanted to.


> So what?

Using the "folder" mental model leads to exactly the bugs in TFA. "Oh, the root folder can only be accessed by root, so if I put a file _inside_ that folder it will be protected". Thats not how it works.

> And you could easily have file access permissions depend on the directory if you wanted to.

No, because you can have multiple links in multiple directories all pointing to the same file.


> Using the "folder" mental model leads to exactly the bugs in TFA. "Oh, the root folder can only be accessed by root, so if I put a file _inside_ that folder it will be protected". Thats not how it works.

I don't really see the logic that way. Even if it was "inside", this permission failure would still happen. And directories, if used properly, can protect a 777 file from being changed. The error in the mental model is in how permissions work, not how files are organized.

> No, because you can have multiple links in multiple directories all pointing to the same file.

Well I put the word "access" there to try to be clearer. If you as system designer wanted to, you could have the path you take to a file affect permissions, even with the 'directory' model and multiple hard links. Heck, you could have permissions be on links instead of on files. YOLO.


> ... directories, if used properly, can protect a 777 file from being changed. The error in the mental model is in how permissions work, not how files are organized.

There is a second possible misconception that I did touch on in my last paragraph, but didn't spell out. On Unix, the permissions check is done when you open the file, not when you perform the read or write. This means that a user who cannot currently open the file (because directory permissions mean they have no way to get to the inode) can nonetheless alter it now if they opened it when they could. So you could rename the file from the attacker's directory into your installer's private directory, verify its cryptographic signature, but then the attacker injects their malware into the file before you start copying, and you install the malware.

Because the two common types of locks on Unix (BSD and POSIX record) are advisory, you can't just lock that file against writers before you check the signature. This is in contrast to Windows, where you can't even rename or delete the file if someone else has it open.
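
The already-open-descriptor problem is easy to reproduce from a shell (a sketch; the file name is made up, and the same thing happens if the victim renames the file instead of chmod-ing it):

  $ echo original > target.txt
  $ exec 3>>target.txt      # "attacker" opens for append while still permitted
  $ chmod 000 target.txt    # "victim" locks the file down afterwards
  $ echo injected >&3       # still succeeds: the permission check happened at open(2)
  $ sudo cat target.txt
  original
  injected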


Making sure it's not already open by someone else is definitely part of "used properly".


How do I do that? fstat(2) is no help on macOS, and even if it were would return false positives from things like backup and content indexers ("Time Machine" and "Spotlight" on macOS).
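
The closest thing to a best-effort answer is pointing lsof at the path, but it is inherently racy (someone can open the file the instant after it returns), so it cannot replace controlling the file's lifecycle (the path below is hypothetical):

  $ sudo lsof /var/private/update.pkg    # lists processes that currently hold the file open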


You need to control the lifecycle of the file in some manner.

Or force a reboot, I guess?


>And directories, if used properly, can protect a 777 file from being changed.

Could you explain how this would work?


Say every hard link to the file is a descendant of a directory that blocks traversal. No subdirectories of those are already open, and the file is not already open. That keeps it safe, right? If there's any loopholes in that, they could be closed.
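
Concretely (a sketch; the directory name is made up, and only the first two commands run as root):

  $ sudo mkdir -m 700 /opt/vault
  $ sudo sh -c 'echo secret > /opt/vault/wide-open && chmod 777 /opt/vault/wide-open'
  $ cat /opt/vault/wide-open    # as an ordinary user: the 700 directory blocks traversal
  cat: /opt/vault/wide-open: Permission denied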


> Say every hard link to the file […]

Ehr, no. Again, there are no files in the conventional sense in UNIX file systems. There are collections of disk blocks pointed to by an i-node and one or more directory entries pointing to an i-node. It is possible to have an i-node with 0 directory entries linking to it as well as multiple unrelated (i.e. not hard links but truly disjoint directory entries) directory entries referencing the same i-node; both are treated as file system errors by fsck and will be fixed up at the next fsck run. Yet, both scenarios can be easily reproduced (without corrupting the file system!) in a file system debugger and live on for a while.

> […] a descendant of a directory that blocks traversal. No subdirectories of those […]

Directory entries in a UNIX file system do not ascend nor descend, they are linked into one or more directories whether they form a hierarchy or not.

A directory might be «protected» by, say, 700 permissions obscuring a particular directory entry, but if a hard link to the same i-node exists in another unrelated directory outside the current hierarchy that has more permissive access, say 755, access to the data blocks referenced by the i-node has already leaked out.


The other reply already covered the definition of hard links. It's a directory entry that points to an inode.

And file system corruption is definitely a loophole.

> Directory entries in a UNIX file system do not ascend nor descend, they are linked into one or more directories whether they form a hierarchy or not.

All the filesystems I'm sufficiently aware of insist on directories being a tree. Every entry except the special .. descends in that tree. And each hard link is in a specific directory.

> if a hard link to the same i-node exists in a another unrelated directory outside the current hierarchy that has more permissive access

That's why I said every hard link to the file!


> And file system corruption is definitely a loophole.

Zero directory entries pointing to an i-node is not a file system corruption as it neither corrupts the file system nor breaks the file system semantics; it is possible to have a garbage collector running in the background to mop up orphaned i-nodes with the file system remaining fully operational at the same time.

Distinct i-nodes pointing to the same block allocation, on the other hand, are a security loophole and create consistency problems. Whether they amount to file system corruption is a matter of academic debate, though.
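
For what it's worth, the everyday, fully supported version of a zero-link i-node is an open-but-unlinked file, and the kernel itself is the garbage collector that reclaims it once the last descriptor goes away. A sketch:

  $ echo data > scratch.txt
  $ exec 3<scratch.txt    # keep a descriptor on the i-node
  $ rm scratch.txt        # link count drops to zero, blocks stay allocated
  $ cat <&3               # the data is still reachable through the descriptor
  data
  $ exec 3<&-             # last reference gone; the kernel reclaims the i-node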

> All the filesystems I'm sufficiently aware of insist on directories being a tree. Every entry except the special .. descends in that tree. And each hard link is in a specific directory.

It is possible to design and write a file system implementation that retains the UNIX file system semantics of i-nodes and directory entries whilst remaining completely flat (i.e. no directory hierarchies, just «.»). Such a file system would be impractical for most use cases today but is easily possible, and such file systems were commonplace before the arrival of the UNIX file system.

Earlier on, you had mentioned: «If there's any loopholes in that, they could be closed». In the example below (which is perfectly legit and does not contain semantic loopholes), which of the directories does «file.txt» belong in or descend from/ascend into: 1) «a/b/c», 2) «d/e/f/g», 3) «.», 4) all of them? Which of the three directories is more specific and why, and what about future hard links?

  $ mkdir -p a/b/c
  $ mkdir -p d/e/f/g
  $ echo 'I am a file' >file.txt
  $ sudo chown 0:0 file.txt
  $ sudo chmod 666 file.txt
  $ ln file.txt a/b/c 
  $ ln file.txt d/e/f/g 
  $ sudo chown 0:0 a
  $ sudo chmod 700 a
  $ ls -l a
  ls: cannot open directory 'a': Permission denied
  $ ls -l d/e/f/g/file.txt
  -rw-rw-rw- 3 root wheel 12 Aug 14 23:59 d/e/f/g/file.txt
  $ ls -l ./file.txt
  -rw-rw-rw- 3 root wheel 12 Aug 14 23:59 ./file.txt
  $ echo 'Anyone can access me' >./file.txt 
  $ cat ./file.txt 
  Anyone can access me


> It is possible to design and write a file system implementation that retains the UNIX file system semantics of i-nodes and directory entries whilst remaining completely flat (i.e. no directory hierarchies, just «.»). Such a file system would be impractical for most use cases today but is easily possible, and such file systems were commonplace before the arrival of the UNIX file system.

Yeah, it would also be possible to design a system that doesn't enforce permissions.

The challenge here is whether you can make a reasonable design that's secure. Not whether any design would be secure; that's self-evidently false. Anyone doing the designing can choose not to use a special bespoke filesystem.

But I don't see how your described filesystem would cause problems. The directory entries are still descendants of the directories they are in. Apply the rest of the logic and those files are secure. It's easier, really, when you don't have to worry about subdirectories. If subdirectories don't exist, they can't be open.

> Earlier on, you had mentioned: «If there's any loopholes in that, they could be closed». In the example below (which is perfectly legit and does not contain semantic loopholes), which of the directories does «file.txt» belong in or descend from/ascend into: 1) «a/b/c», 2) «d/e/f/g», 3) «.», 4) all of them? Which of the three directories is more specific and why, and what about future hard links?

The file is not in a specific directory. Links to the file are in `pwd`, a/b/c, and d/e/f/g. "Being in" is the same as "descending from".

If you secure `pwd` (and nothing is already open), then all three hard links will be secured.

Or if you remove the hard link in `pwd`, and secure g (and nothing is already open), then the file will be secured.

"./a" descends from ".", one hard link to the file descends from ".", "./a/b" descends from "./a", "./a/b/c" descends from "./a/b", one hard link to the file descends from "./a/b/c". Plus the same for d/e/f/g, plus every transitive descent like "./a/b" descending from "." I hope that's what you mean by "more specific"?

If future hard links are made, then they follow the same rules. If any hard link is not secured, then the file is not secured. And a user without access to the file cannot make a new hard link to the file.


> Yeah, it would also be possible to design a system that doesn't enforce permissions.

It is even easier than that: one only has to detach the disk and reattach it to another UNIX box to gain access to any file, as the file system itself is defenceless and offers no protection against physical access to its on-disk layout. File system encryption is the only solution that makes physical access impractical, or at least convoluted.

And, since UNIX file systems delegate permission checks to the kernel via the VFS, it is also possible for a person with nefarious intentions to modify the file system code to make it always return 777 for any i-node accessed through it, find a local zero-day exploit, load the rogue file system kernel module and remount the file system(s) to bypass the permission enforcement in the kernel.

The reverse is also true: if the kernel and the file system support access control lists, standard UNIX file permissions become largely meaningless, and it becomes possible to grant or revoke access to a file owned by root with 600 permissions for an arbitrary user or group only. Using the same example from above:

  $ cat ./file.txt                         
  Anyone can access me
  $ sudo /bin/chmod +a "group:staff deny write" ./file.txt
  $ /bin/ls -le ./file.txt
  -rw-rw-rw-+ 3 root  wheel  21 14 Aug 23:59 ./file.txt
   0: group:staff deny write
  $ echo 'No-one from the staff group can access me any longer' >./file.txt
  zsh: permission denied: ./file.txt
  $ id
  uid=NNNNNN(morally.bold.mollusk) gid=MMMMMM(staff) groups=MMMMMM(staff),[… redacted …]
  $ ls -la ./file.txt
  -rw-rw-rw-+ 3 root wheel 21 Aug 15 17:28 ./file.txt

> The challenge here is whether you can make a reasonable design that's secure.

Indeed, rarely can security be bolted on with any measurable success, and a system can be secure only if it is secure by design. But security is also a game of constant juggling of trade-offs that may or may not be acceptable in a particular use case. Highly secure designs are [nearly always] hostile to users and are a tremendous nuisance in daily use. The UNIX answer to security is delegation of responsibilities: «I, UNIX, will do a reasonable job of keeping the system secure, but the onus is on you, user, to exercise due diligence, and – oh, by the way – here is a shotgun to shoot yourself in the foot (and injure bystanders as a bonus) if you, the user, are negligent about keeping your data secure».

> "./a" descends from ".", one hard link to the file descends from ".", "./a/b" descends from "./a", "./a/b/c" descends from "./a/b", one hard link to the file descends from "./a/b/c". Plus the same for d/e/f/g, plus every transitive descent like "./a/b" descending from "." I hope that's what you mean by "more specific"?

The point I was trying to make was that specificness is a purely logical concept. In the aforementioned example, there are 3x directory entries at 3x arbitrary locations and any of them can be used to access the data referenced to via an i-node. Once a file is opened using either of those three directory entries, it is not possible to trace the open file descriptor back to a specific directory entry. Therefore, none of the three directory entries is more specific than the others – they are all equal.


> The point I was trying to make was that specificness is a purely logical concept. In the aforementioned example, there are 3x directory entries at 3x arbitrary locations and any of them can be used to access the data referenced to via an i-node. Once a file is opened using either of those three directory entries, it is not possible to trace the open file descriptor back to a specific directory entry. Therefore, none of the three directory entries is more specific than the others – they are all equal.

I see.

Then I would agree that every path is equally specific.

But I never wanted to trace a file descriptor back to a specific directory entry. The question that matters is whether all the directory entries for a file are in secure locations. That treats them all equally.

Also, part of the scenario I laid out is that the file is not open to begin with. (If you were to try to check if the file is open, that's outside the scenario, but also shouldn't care what directory entry was used.)


> multiple unrelated (i.e. not hard links but truly disjoint directory entries) directory entries referencing the same i-node

That's what a hard link is. What we call hard links in Unix/Linux is when you have multiple distinct directory entries referencing the same inode.


I might be misunderstanding what you are saying, but it sounds like a lot of reliance on people knowing and wanting to do the right thing?


I don't see the issue.

Let's put all subtleties about Unix directories to the side. Zoom wanted to change the permissions of a file so that only root could access it. The obviously correct way to do that is to simply change the permissions of the file.

Even if their solution of putting the file in a root dir worked the way they expected, it would be a circuitous and hacky solution.

> sounds like a lot of reliance on people knowing and wanting to do the right thing?

At a certain point, people need to have basic knowledge. There's a lock on your front door. It does not lock when you turn your lights off. The lock maker is not responsible if you expected it to.


> The obviously correct way to do that is to simply change the permissions of the file.

Obvious, but incorrect. As pointed out elsewhere, the permission check is done when a process opens a file, not when it performs read/write operations. So an attacker could get a legitimate file, open it for writes, and trigger the Zoom update on it; Zoom would then change the permissions to prevent writes, and then the attacker could modify the file using its already-open file handle.


I'm not sure what you're suggesting. Would you go to a doctor who doesn't know or want to do the right thing?


So how do you prevent bad actors from abusing this, and/or train every developer on the planet to ensure they follow the right processes?

Sounds like it's something that probably happens a lot and will continue to happen a lot?


Yes, and the way you'd end up doing that is by not having doctor telepathy.


No, we go to facebook sir.


Most of unix is like that though.


This cannot be said enough.


I'm not sure about all languages, but this actually caught me by surprise. When you do a 'mv' command (tested on macOS), it does not retain file permissions by default. You actually need to pass a special flag in order to do so.

Objective C does retain perms by default using some common move techniques.


Whatever library or tool you use will just call the `rename` syscall. As you would expect, rename simply renames a file. It deletes the old directory entry and creates a new one.

If you're using a library or tool to do this for you, you should know what it's doing.


What? Yes it does. You are confusing mv and cp.
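
Easy to check: within a single filesystem mv is a rename(2), so the mode bits (a property of the inode, not the directory) come along with no flags at all. A sketch with made-up listing details:

  $ touch f && chmod 640 f && ls -l f
  -rw-r-----  1 user  staff  0 Aug 15 12:00 f
  $ mv f g && ls -l g
  -rw-r-----  1 user  staff  0 Aug 15 12:00 g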


This is a pretty good example. Sure, we should know our tools, but this is not intuitive behavior if you don't know.

https://superuser.com/questions/101676/is-there-some-differe...

It is not until the last comment on the accepted answer that you get to the difference in permissions (along with the answers that were not accepted as best).


Or the parent could be trying to mv a file from one filesystem to another. In that case, mv will have to revert to a copy-then-delete operation, and the user may not have the permissions necessary to set up the new file with all the same metadata as the original.
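
A sketch of that case (the second mount point and the inode numbers are hypothetical):

  $ ls -i /tmp/f
  1111 /tmp/f
  $ mv /tmp/f /Volumes/USB/f    # crosses a filesystem boundary, so rename(2) fails with EXDEV
  $ ls -i /Volumes/USB/f
  2222 /Volumes/USB/f           # new inode: mv copied the data and unlinked the original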


Good point.



