
Filesystem vulnerabilities

Posted Feb 21, 2013 1:06 UTC (Thu) by dgc (subscriber, #6611)
In reply to: Filesystem vulnerabilities by rwmj
Parent article: A story of three kernel vulnerabilities

> Vendors seem not to consider filesystem vulnerabilities to be serious
> (including Red Hat who I work for).

Vendors take them extremely seriously, but there's a lot more to the process than "OMG!!! Security Problem! World ends at 5pm if we don't have a fix by then!". As a filesystem developer (who coincidentally works for Red Hat, too) I have fixed my fair share of fsfuzz-related bug reports over the years.

So, what's the real issue here? It's that most fuzzer "filesystem vulnerabilities" are either a simple denial of service (a non-exploitable kernel crash), or are only exploitable when you *already have root* or someone does something stupid from a security perspective. However, once a problem is reported to the security process it is captured, and the security process takes over everything regardless of whether subsequent domain-expert analysis shows that the bug is a security problem or not.
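
For concreteness, this is roughly the kind of harness that produces such reports; a minimal sketch only, with the image path, flip count and the follow-on loop-mount step all illustrative rather than taken from any real fuzzer:

    /*
     * Sketch of the kind of image fuzzing behind these reports: flip a
     * few random bits in a filesystem image, then try to mount the
     * result and watch for oopses.  Paths and counts are illustrative.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <image> <nflips>\n", argv[0]);
            return 1;
        }

        int fd = open(argv[1], O_RDWR);
        off_t size = (fd >= 0) ? lseek(fd, 0, SEEK_END) : -1;
        if (size <= 0) {
            perror(argv[1]);
            return 1;
        }

        srandom(time(NULL));
        for (long i = 0; i < atol(argv[2]); i++) {
            off_t off = random() % size;
            unsigned char byte;

            pread(fd, &byte, 1, off);
            byte ^= 1 << (random() % 8);    /* flip one random bit */
            pwrite(fd, &byte, 1, off);
        }
        close(fd);

        /* A real harness would now loop-mount the image and exercise it. */
        return 0;
    }

Almost everything this kind of loop turns up is a crash in the mount or metadata-parsing paths - i.e. exactly the class of bug described above.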

> For example OpenStack out of the box will mount untrusted guest
> filesystems on the host kernel,

This is a prime example of "doing something stupid from a security perspective". Virtualisation is irrelevant here - the OpenStack application is doing the equivalent of picking up a USB stick in the car park and plugging it into a machine on a secured network...
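
To see why the mounting step itself is the dangerous part, consider a sketch of what the host ends up doing. The loop device and mountpoint here are made up, and the untrusted image is assumed to have been attached already (e.g. "losetup /dev/loop0 guest.img"):

    /*
     * Sketch of host-side mounting of an untrusted guest image.
     * Device, mountpoint and fs type are illustrative.
     */
    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        /*
         * From here on, the *host* kernel's ext4 code parses
         * attacker-controlled metadata.  MS_RDONLY doesn't help: the
         * superblock, group descriptors etc. are read and trusted
         * either way.
         */
        if (mount("/dev/loop0", "/mnt/guest", "ext4", MS_RDONLY, NULL) < 0) {
            perror("mount");
            return 1;
        }
        return 0;
    }

From that mount() call onwards the host kernel is trusting attacker-controlled metadata; tools like libguestfs sidestep this by doing the parsing inside a throwaway appliance VM rather than on the host.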

However, to really understand the situation from an fs developer's POV you need to understand a bit of history and a bit about risk. That is, any change to the filesystem format validation routines carries the risk of causing corruption or falsely detecting corruption, and hence a bad fix can seriously screw over the entire filesystem's user base.

Think about it for a moment - a divide-by-zero crash on a specifically corrupted filesystem is simply not something that occurs in production environments. However, the changes to the code that detect and avoid the problem are executed repeatedly by every single ext4 installation in existence. IOWs, the number of people that may be affected by the corrupted-filesystem div0 problem is *exceedingly tiny*, while the number of people that could be affected by a bad fix is, well, the entire world.
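
The shape of such a fix, with entirely hypothetical structure and field names (this is not the actual ext4 patch), looks something like this:

    #include <stdio.h>

    /* Hypothetical on-disk superblock -- not the real ext4 layout. */
    struct example_super {
        unsigned int s_blocks_per_group;  /* attacker-controlled on disk */
        unsigned int s_blocks_count;
    };

    /* Mount-time validation: reject the corruption before anything divides by it. */
    static int example_check_super(const struct example_super *sb)
    {
        if (sb->s_blocks_per_group == 0)
            return -1;    /* in-kernel this would be -EINVAL or similar */
        return 0;
    }

    int main(void)
    {
        struct example_super corrupt = { 0, 8192 };    /* fuzzed image */

        if (example_check_super(&corrupt)) {
            fprintf(stderr, "corrupt superblock rejected at mount time\n");
            return 1;
        }
        /* Without the check above, this is the div0 the fuzzer found: */
        printf("groups = %u\n",
               corrupt.s_blocks_count / corrupt.s_blocks_per_group);
        return 0;
    }

The check itself runs on every mount of every filesystem everywhere, which is exactly why getting it wrong - rejecting valid filesystems, say - does far more damage than the crash it prevents.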

Then consider that the CVE process puts pressure on the developers to fix the problem *right now* regardless of any other factors. Hence the fixes tend to be rushed, not well thought out, only lightly tested and not particularly well reviewed. In the filesystems game, that means the risk of regressions, or of the fix not working entirely as intended, is significant.

In the past this risk was ignored for security fixes, and that's why we have a long history of needing to add more fixes to previous security fixes. We have proven that the risk of regressions from rushed fixes is real and cannot be ignored. Hence, in this arena, the CVE process could be considered more harmful to users than leaving the problem unfixed while we follow the usual slower, more considered release process. i.e. the CVE process (and measuring vendor performance with CVE open/close metrics) simply does not take into account that fixing bugs badly can be far worse for users than taking extra time to fix the bug properly.

Vendors that do due diligence (i.e. risk assessment of such bugs outside of the security process) are more likely to classify fuzz-based filesystem bugs correctly than the security process is. Hence we see vendors mitigating the risk of regressions by testing filesystem fixes fully before releasing them, rather than rushing out a fix just to close a CVE quickly.

IOWs, more often than not, vendors are doing exactly the right thing by their user base with respect to filesystem vulnerabilities. The vendors should be congratulated for improving on a process that had been proven to give sub-standard results, not flamed for it...

-Dave.



Filesystem vulnerabilities

Posted Feb 21, 2013 1:55 UTC (Thu) by PaXTeam (guest, #24616)

the CVE process has nothing to do with how fast a bug is fixed. it's only concerned with cataloging bugs; disclosure/fix/etc. strategy is always up to the vendor/author. so if you had a problem with rushed fixes in the past, look no further than your own managers who forced you to do it.

