
MeeGo rethinks privacy protection

By Jonathan Corbet
April 13, 2011
Companies operating in the handset market have different approaches to almost everything, but they do agree on one thing: they have seen the security problems which plague desktop systems and they want no part of them. There is little consistency in how the goal of a higher level of security is reached, though. Some companies go for heavy-handed central control of all software which can be installed on the device. Android uses sandboxing and a set of capabilities enforced by the Dalvik virtual machine. MeeGo's approach has been based on traditional Linux access control paired with the Smack mandatory access control module. But much has changed in the MeeGo world, and it appears that security will be changing too.

In early March, the project sent out a notice regarding a number of architectural changes made after Nokia's change of heart. With regard to security, the announcement said:

In the long-term, we will re-evaluate the direction we are taking with MeeGo security with a new focus on *End-User Privacy*. While we do not intend to immediately remove the security enabling technologies we have been including in MeeGo, all security technologies will be re-examined with this new focus in mind.

It appears that at least some of this reexamination has been done; the results were discussed in this message from Ryan Ware which focused mainly on the problem of untrusted third-party applications.

The MeeGo project, it seems, is reconsidering its decision to use the Smack access control module; a switch to SELinux may be in the works. SELinux would mainly be charged with keeping the trusted part of the system in line. All untrusted code would be sandboxed into its own container; each container gets a disposable, private filesystem in the form of a Btrfs snapshot. Through an unspecified mechanism (presumably the mandatory access control module), these untrusted containers could be given limited access to user data, device resources, etc.
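
The message does not spell out how those containers would be plumbed together, so the following is only a rough sketch of the disposable-filesystem idea: snapshot a Btrfs subvolume for one application session, run the application against the snapshot, and throw the snapshot away afterward. The subvolume paths and the launch command are invented for the purpose of illustration, and it naturally needs privileges to run.

```python
# A minimal sketch (not MeeGo's actual implementation) of a disposable,
# per-session filesystem built from a Btrfs snapshot. The subvolume
# paths and the application command line are hypothetical.
import subprocess
import sys

BASE_SUBVOL = "/apps/untrusted/base"    # assumed to be a Btrfs subvolume
SNAP_DIR = "/apps/untrusted/sessions"   # assumed to be on the same filesystem

def run_in_disposable_fs(app_cmd, session_id):
    snap = f"{SNAP_DIR}/{session_id}"
    # Writable snapshot: the application's changes never reach the base.
    subprocess.run(["btrfs", "subvolume", "snapshot", BASE_SUBVOL, snap],
                   check=True)
    try:
        # Point the untrusted application at the snapshot, not the real data.
        return subprocess.run(app_cmd, cwd=snap).returncode
    finally:
        # Discard the whole session, whatever the application did to it.
        subprocess.run(["btrfs", "subvolume", "delete", snap], check=True)

if __name__ == "__main__":
    sys.exit(run_in_disposable_fs(["./untrusted-app"], "session-0001"))
```

The mandatory access control layer would still be needed on top of something like this; a throwaway filesystem keeps the application's writes contained, but it says nothing about what the application is allowed to read.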

It probably surprised nobody that Casey Schaufler, the author of Smack, was not sold on the value of a change to SELinux. This change would, he said, add a great deal of complexity to the system without adding any real security:

SELinux as provided in distributions today does not, for all its trappings, complexity and assurances, enforce any security policy. SELinux is capable of enforcing a security policy, but no general purpose system available today provides anything more than a general description of the experienced behavior of a small subset of the system supplied applications.

The people who built SELinux fell into a trap that has been claiming security developers since computers became programmable. The availability of granularity must not be assumed to imply that everything should be broken down into as fine a granularity as possible. The original Flask papers talk about a small number of well defined domains. Once the code was implemented however the granularity gremlins swarmed in and now the reference policy exceeds 900,000 lines. And it enforces nothing.

Ryan's response was that the existing SELinux reference policy is not relevant because MeeGo does not plan to use it:

At this point I want nothing to do with the Reference Policy. I would much prefer to focus on a limited set of functionality around privacy controls. I know that means it won't necessarily exhibit "expected" SELinux behavior. Given the relatively limited range of verticals we are trying to support, I believe we will be able to get away with that.

What this means is that he is talking about creating a new SELinux policy from the beginning. The success of such an endeavor is, to put it gently, not entirely assured. The current reference policy has taken many years and a great deal of pain to reach its current state of utility; there are very few examples of viable alternative policies out there. Undoubtedly other policies are possible, and they need not necessarily be as complex as the reference policy, but anybody setting out on such a project should be under no illusions that it will be easily accomplished.

The motivation for the switch to SELinux is unclear; Ryan suggests that manufacturers have been asking for it. He also said that manufacturers would be able to adjust the policy for their specific needs, a statement that Casey was not entirely ready to accept:

There are very few integrators, even in the military and intelligence arenas, who feel sufficiently confident with their SELinux policy skills to do any tweaking that isn't directly targeted at disabling the SELinux controls.

Ryan acknowledged that little difficulty, but he seems determined to press on in this direction.

The end goal of all this work is said to be preventing the exposure of end-user data. That will not be an easy goal to achieve either, though. Once an application gets access to a user's data, even the firmest SELinux policy is going to have a hard time preventing the disclosure of that data if the application is coded to do so; Ryan has acknowledged this fact. Any Android user who pays attention knows that even trivial applications tend to ask for combinations of privileges (address book access and network access, for example) which amount to giving away the store. Preventing information leakage through a channel like that - while allowing the application to run as intended - is not straightforward.
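
To see why the combination matters more than any single grant, consider the sort of check an installer could run over a requested-permission list; the permission names and the "risky pairs" below are invented for the example and are not taken from any real manifest format. A check like this can at most warn the user - it cannot, as noted above, tell a legitimate upload from a leak once both grants have been made.

```python
# Hypothetical sketch: flag permission *combinations* that together form an
# obvious exfiltration path, even if each permission alone looks harmless.
# The permission names and the risky pairs are invented for illustration.
RISKY_PAIRS = {
    frozenset({"READ_CONTACTS", "NETWORK"}),
    frozenset({"READ_SMS", "NETWORK"}),
    frozenset({"FINE_LOCATION", "NETWORK"}),
}

def exfiltration_risks(requested):
    """Return the risky pairs that are fully covered by the request."""
    req = set(requested)
    return [pair for pair in RISKY_PAIRS if pair <= req]

# A "trivial" application asking for the store: the address book plus the network.
print(exfiltration_risks(["READ_CONTACTS", "NETWORK", "VIBRATE"]))
```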

So it may be that the "put untrusted applications in a sandbox and limit what they can see" model is as good as it's going to get. As Casey pointed out, applications are, for better or worse, part of the security structure on these devices. If an application has access to resources with security implications, the application must implement any associated security policy. That's a discouraging conclusion for anybody who wants to install arbitrary applications from untrusted sources.

Index entries for this article
Security/Distribution security
Security/Linux Security Modules (LSM)
Security/Security Enhanced Linux (SELinux)



MeeGo rethinks privacy protection

Posted Apr 14, 2011 9:09 UTC (Thu) by ortalo (guest, #4654)

Nice article. As usual in this context, it raises more questions than it answers. Of course...
A small note too, as a reaction to the last sentence, from someone paid for security (and not for usability). If all users want to install arbitrary applications from untrusted sources, well... my first reaction is that the situation is desperate. Given such a design requirement, isn't the right design decision to _remove_ all security from the system?
As a friend of my users, I would probably feel guilty about abandoning them with such insecure systems; but as an engineer, I have the feeling that the only sensible, rational solution to such a requirement is the removal of security functions.
I am not happy with the situation, but that is because I do not have the same _requirements_; not because of conflicting or inadequate or complex security mechanisms.

I do not want to minimize the Smack vs. SELinux debate, or to obscure the (certainly necessary) work on expanding our practical experience with implementing generic and/or more targeted security functions.
However, shouldn't we focus on the high-level security requirements? It seems this is something users are not able to produce themselves. But someone must do it, and it does not seem to be easy.
We probably cannot express the "average user" security requirements, because most readers here are computer power users. But we could express our own security requirements. We are not so different from regular users: we care more about protecting the privacy of our family pictures than about ASLR randomness quality, we care more about the access controls on our online banking than on our LWN account, etc. As an added bonus, we probably have an understanding of the impact of weak password storage and/or a remotely accessible vulnerable system service, and of many other technical subtleties (including SELinux configuration complexity), that average users do not have at all.
Maybe such requirements could give those implementing security functions enough fuel to reach a well-recognized and generally useful target?

This also reminds me of the FreedomBox initiative. Initially, I put a lot of hope in that idea; but now that I see their requirements, I'm less interested _personally_. (I still find it very interesting as a security-oriented project; just less so for my personal usage.)

MeeGo rethinks privacy protection

Posted Apr 28, 2011 8:25 UTC (Thu) by renox (guest, #23785)

> If all users want to install arbitrary applications from untrusted sources well... my first reaction is that the situation is desperate.

Not all users, but many do indeed.
As for your reaction, I disagree: if the application is isolated from the rest of the system, then it shouldn't be able to harm it.

But there are two big issues:
1) providing an isolation mechanism which is "usable enough" that users won't disable it;
2) having all the normal applications use this mechanism correctly.
Otherwise the user will become used to giving every right to every application, and no real security is provided.

MeeGo rethinks privacy protection

Posted Apr 14, 2011 21:16 UTC (Thu) by smoogen (subscriber, #97)

> That's a discouraging conclusion for anybody who wants to install arbitrary applications from untrusted sources.

Well, mathematically I don't see how any other conclusion is possible without calls to magical ponies and beefy miracles. I can see ways of reducing the risks (say, every application is considered its own user and runs inside some sort of tarpit), but once it shares anything... it is up to it not to share too much. And once it accepts anything... it must not accept too much.
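
For what it's worth, the "every application is its own user" part is roughly what Android already does with per-application UIDs. A bare-bones sketch of that mechanism (with made-up UID/GID values and application path, and it obviously has to start as root) could look like this:

```python
# Minimal sketch of "each application is its own user": fork, drop to a
# UID/GID reserved for this one application, then exec it. The numeric
# IDs and the binary path are hypothetical.
import os

APP_UID = 10042   # assumed to be allocated to this application alone
APP_GID = 10042

def run_as_app_user(argv):
    pid = os.fork()
    if pid == 0:
        try:
            os.setgroups([])    # no supplementary groups
            os.setgid(APP_GID)  # group first, while we still can
            os.setuid(APP_UID)  # irreversible privilege drop
            os.execvp(argv[0], argv)
        finally:
            os._exit(127)       # reached only if the exec itself failed
    _, status = os.waitpid(pid, 0)
    return status               # raw wait status from the child

run_as_app_user(["/opt/apps/untrusted-game/bin/game"])
```

The tarpit part - what such a process is still allowed to see and talk to - is exactly where the sharing problem comes back, so this shrinks the attack surface without solving the leak.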

MeeGo rethinks privacy protection

Posted Apr 14, 2011 22:06 UTC (Thu) by dcg (subscriber, #9198)

I loved the quote from Casey Schaufler; I always thought I was mostly alone in my unhappiness with SELinux...

Can we agree at least that it's not perfect? For example, as someone put it on the lkml, the SELinux relabeling process is the new fsck (in fact, it's much worse - fsck is really fast these days), and the solutions I've heard about (http://lkml.org/lkml/2010/3/9/148) only seem like workarounds to me...

MeeGo rethinks privacy protection

Posted Apr 17, 2011 20:06 UTC (Sun) by alison (subscriber, #63752)

I don't see much difference between the current desktop situation and the mobile one, except that users are more likely to install ill-behaved apps on mobile. When I use my distro's package manager to install the Chromium browser, I get a warning that the program's source is untrusted. Is there a qualitative risk difference between the Chromium installation and downloading some random game from an online marketplace that I'm missing? Isn't the only sensible approach on mobile to install only signed packages from trusted sources, as on the desktop?

The two solutions I employ on the desktop are to keep the most sensitive data encrypted and to use virtualization (qemu-kvm) to encapsulate some operations (either the most sensitive or the most risky). The second approach is a variant of the Android sandboxing. I don't see what alternative security-naive users can sensibly pursue.


Copyright © 2011, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds