

/proc and directory permissions

By Jake Edge
October 28, 2009

In a discussion of the O_NODE open flag patch, an interesting, though obscure, security hole came to light. Jamie Lokier noticed the problem, and Pavel Machek eventually posted it to the Bugtraq security mailing list.

Normally, one would expect that a file in a directory with 700 permissions would be inaccessible to all but the owner of the directory (and root, of course). Lokier and Machek showed that there is a way around that restriction by using an entry in an attacking process's fd directory in the /proc filesystem.

If the directory is accessible to the attacker at some point while the file is present, the attacker can open the file for reading and hold it open even after the victim changes the directory permissions. Any normal write to the open file descriptor will fail, because it was opened read-only, but writing to /proc/$$/fd/N, where N is the open file descriptor number, will succeed based on the permissions of the file alone. If the file allows the attacking process to write to it, writing through the /proc entry will succeed regardless of the permissions of the parent directory. This is counter-intuitive and, even though the example is somewhat contrived, it seems to constitute a security hole.
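
Concretely, the sequence looks something like the following sketch (the /dir path and the descriptor number 3 are invented for illustration). While the directory is still accessible, the attacker opens the world-writable file read-only and holds the descriptor:

$ exec 3< /dir/file.txt

The victim then locks the directory down:

# chmod 700 /dir

A normal write through the descriptor fails, since it was opened read-only, but reopening it by way of /proc succeeds, because only the file's own permissions are checked:

$ echo hacked >&3
bash: echo: write error: Bad file descriptor
$ echo hacked > /proc/$$/fd/3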

The Bugtraq thread got off course quickly when it was noted that a similar effect could be achieved by creating a hard link to the file before the directory permissions were changed. While that is true, Machek's example looked for that case by checking the link count on the file after the directory permissions had been changed; the hard-link scenario would be detected at that point.
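
That check is a one-liner; with GNU stat, for example (the path here is invented for illustration), a link count of 1 after the directory change shows that no extra name for the file exists:

$ stat -c %h /dir/file.txt
1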

One can imagine situations where programs do not put the right permissions on the files they use and administrators attempt to work around that problem by restricting access to the parent directory. Using this technique, an attacker could still access those files, in a way that was difficult to detect. As Machek noted, unmounting the /proc filesystem removes the problem, but "I do not think mounting /proc should change access control semantics."

There is currently some discussion of how, and to some extent whether, to address the problem, but a consensus (and patch) has not yet emerged.

Index entries for this article
Kernel: Security
Security: Linux kernel



/proc and directory permissions

Posted Oct 29, 2009 3:54 UTC (Thu) by virtex (subscriber, #3019) [Link] (6 responses)

I'm a little confused by this issue. When I look at the various fd directories under proc, I see entries like the following:

$ ls -ld /proc/*/fd
dr-x------ 2 root root 0 2009-10-28 22:45 /proc/1001/fd
dr-x------ 2 root root 0 2009-10-28 22:45 /proc/1002/fd
dr-x------ 2 root root 0 2009-10-28 22:45 /proc/1010/fd
dr-x------ 2 root root 0 2009-10-28 22:45 /proc/1012/fd
dr-x------ 2 gdm gdm 0 2009-10-28 22:45 /proc/1844/fd
dr-x------ 2 root root 0 2009-10-28 22:45 /proc/1980/fd

...

It looks like the file descriptors under proc are accessible to only the process owner and root, so an attacker wouldn't be able to get to them. Is this standard in the Linux kernel, or is my kernel (Ubuntu 9.04 and 9.10) patched to restrict the permissions?

/proc and directory permissions

Posted Oct 29, 2009 4:41 UTC (Thu) by jimparis (guest, #38647) [Link] (3 responses)

It's not as bad as you thought -- setting up the right situation is tricky.

Consider something like this setup:
$ sudo ls -al /dir
total 12
drwx------  2 root root 4096 2009-10-29 00:28 .
drwxr-xr-x 27 root root 4096 2009-10-29 00:28 ..
-rw-rw-rw-  1 root root    6 2009-10-29 00:28 file.txt
Now as an unprivileged user, you can't read or write the file, even though it's mode 0666, because the directory is mode 0700:
$ echo hi > /dir/file.txt
bash: /dir/file.txt: Permission denied
Now here's the trick. Assume that you somehow have an open read-only file descriptor that refers to this file. In the bugtraq conversations, this was achieved by opening the file while the administrator was messing with permissions. But there are other cases — for example, a system daemon might have opened the file read-only and passed you the file descriptor over Unix sockets. Or you inherited a read-only file descriptor when your process was started.

Now, once you have this open fd, you can re-open it as read-write using the link in /proc/$YOUR_OWN_PID/fd/ — which is allowed because the file is mode 0666, even though the directory typically wouldn't allow you to do that.

A source of contention is whether this is unexpected. It's certainly not completely obvious.

/proc and directory permissions

Posted Oct 29, 2009 4:59 UTC (Thu) by jimparis (guest, #38647) [Link] (2 responses)

Here is an example that shows the non-obvious behavior:
$ sudo su
# mkdir -m 0700 /dir
# echo "safe" > /dir/file.txt
# chmod 0666 /dir/file.txt
# ls -al /dir
total 12
drwx------  2 root root 4096 2009-10-29 00:28 .
drwxr-xr-x 27 root root 4096 2009-10-29 00:28 ..
-rw-rw-rw-  1 root root    7 2009-10-29 00:43 file.txt
# cat /dir/file.txt
safe
Now user "nobody" cannot read or write this file:
# su nobody -c 'cat /dir/file.txt'
sh: /dir/file.txt: Permission denied
# su nobody -c 'echo "hacked" > /dir/file.txt'
sh: /dir/file.txt: Permission denied
If we provide an open read-only file descriptor (as stdin, fd 0), they can read it:
# su nobody -c 'cat <&0' < /dir/file.txt
safe
But they still can't write to this descriptor:
# su nobody -c 'echo "hacked" >&0' < /dir/file.txt
sh: line 0: echo: write error: Bad file descriptor
Unless we re-open the file using the magic link in /proc:
# su nobody -c 'echo "hacked" >/proc/self/fd/0' < /dir/file.txt
# cat /dir/file.txt
hacked

/proc and directory permissions

Posted Oct 30, 2009 0:33 UTC (Fri) by giraffedata (guest, #1954) [Link] (1 responses)

There's something missing from the explanation of why this is a problem, because the basic idea that you can open a file before permissions to it are supposedly revoked and then keep using the file doesn't require any /proc/PID/fd magic.

The scenarios show an attacker opening read-only and then escalating to read-write after some permissions were changed, but the attacker could just as easily have opened read-write in the first place.

Are we supposed to imagine some scenario in which the system administrator ensures only read-only opens have happened at the time he changes the directory permission and thus assumes the file is safe from writing?

/proc and directory permissions

Posted Oct 30, 2009 3:26 UTC (Fri) by jimparis (guest, #38647) [Link]

> The scenarios show an attacker opening read-only and then escalating to
> read-write after some permissions were changed

No, they didn't. No permissions were changed between the time the attacker had a read-only fd and the time the attacker managed to get a read-write fd.

- The attacker could not open the file (neither read-only nor read-write)
- The superuser gave the attacker a read-only handle to the file
- The attacker turned it into a read-write handle

No permission changes were involved; this is not a race condition.

/proc and directory permissions

Posted Oct 30, 2009 0:26 UTC (Fri) by giraffedata (guest, #1954) [Link] (1 responses)

> It looks like the file descriptors under proc are accessible to only the
> process owner and root, so an attacker wouldn't be able to get to them.

The attacker is the process owner. The attacker opened the file back when he was permitted to do so.

/proc and directory permissions

Posted Oct 30, 2009 10:09 UTC (Fri) by nix (subscriber, #2304) [Link]

Or the attacker was handed the fd by a daemon running as someone else.

/proc and directory permissions

Posted Oct 29, 2009 14:52 UTC (Thu) by RobSeace (subscriber, #4435) [Link]

> While that is true, Machek's example looked for that case by checking the
> link count on the file after the directory permissions had been changed.
> The hardlink scenario would be detected at that point.

Well, in that case, you can detect this new scenario with a simple "lsof"...
If they can be expected to check the link count as a defense, surely they can
also check for already-open FDs on files that were once perfectly accessible
when they change the perms to render them inaccessible...
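
For instance, something like this (the PID and process name are invented)
would show any still-open descriptors before the permissions change:

# lsof /dir/file.txt
COMMAND  PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
cat     4242 nobody    0r   REG    8,1        7 1234 /dir/file.txt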

Also, while changing the perms on the directory, why not go the further step
of changing the file perms as well? It would seem a logical and reasonable
thing to do...

Magic filesystems are too clever

Posted Oct 29, 2009 23:09 UTC (Thu) by quotemstr (subscriber, #45331) [Link] (3 responses)

Kernighan wrote:
Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.

Made in classic ha-ha-only-serious fashion, the statement is as true as ever. Magic filesystems like procfs and sysfs are too clever, and they create many problems, ranging from security (in the procfs case) to fundamental correctness (in the sysfs case). And for what? Being able to read a process's FD table with ls, or to change a system's current sleep state with cat?

Nobody does that. People really use tools like udev, lsof, and ps to manipulate entries in the magic filesystems, and these tools could be better implemented using sysctl(2) and new system calls.

Magic filesystems introduce needless complexity and obscure correctness and security problems while providing nothing over other approaches. Magic filesystems are a bad idea and ought to be slowly deprecated in favor of sysctl (for generic key-value manipulation) and special-purpose system calls (for everything else).

Magic filesystems are too clever

Posted Oct 29, 2009 23:11 UTC (Thu) by quotemstr (subscriber, #45331) [Link]

Err, hate to reply to myself, but I should point out my previous post, "sysfs is dumb".

Magic filesystems are too clever

Posted Oct 30, 2009 10:15 UTC (Fri) by nix (subscriber, #2304) [Link] (1 responses)

Ew. No thanks. I find the /proc/$PID directories utterly crucial for all sorts of things. The problem with needing tools and sysctls to get at things is that they're not shell-compatible: you *must* write a tool to access them, and in extremis you can't do that because there isn't time; but you *can* cd into a directory and use ordinary shell tools. It's also crucial for non-emergency but ad hoc stuff, which is a huge proportion of the stuff people actually do (as the non-ad-hoc stuff can be automated).

And for systems-administration stuff, well, am I the only person who's ever done a grep -R of /proc/sys/? Surely not.

Being shell-transparent is a huge huge huge feature. Don't break it.

Bringing up ps(1) as a counterargument is ridiculous: the reason ps(1) exists is both compatibility with Unix and that it can provide heaps of features that would be really annoying to implement by bashing on /proc yourself. But having /proc made ps(1) a hell of a lot easier to implement than it would have been otherwise (how else would you do it? grovelling through /dev/kmem, like traditional Unix? files in /proc with heaps of magic ioctl()s, like Solaris? Oh, please. We're moving *away* from that sort of opaque nightmare.)

(I'd actually quite like rm -r /proc/$PID to be made equivalent to kill -9, but I haven't implemented that or even tried to.)

Magic filesystems are too clever

Posted Oct 30, 2009 21:33 UTC (Fri) by foom (subscriber, #14868) [Link]

While I certainly agree that /proc/$PID is a useful feature...

> sysctls [...] not shell-compatible

Say what? "sysctl -w variable=value" is quite shell-friendly. For kernel *configuration*, I think the
sysctl interface made (and still makes!) a lot more sense than the 20 different mechanisms added
since then.

"sysctl -a | grep Whatever" is just as good -- perhaps even better -- than recursive grep against
/proc/sys, /sys/, and whatever the new-userspace-interface-of-the-month this month is...


Copyright © 2009, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds