Supplementing CVEs with !CVEs: conflicts of interest
Posted Dec 6, 2023 18:31 UTC (Wed) by geofft (subscriber, #59789)
Parent article: Supplementing CVEs with !CVEs
It is ludicrous to advertise this as a solution to conflicts of interest when the whole project seems to exist so that a security research firm can present its findings as legitimate. The CVE system has both false positives and false negatives (which is, honestly, unsurprising for such a complex system). For that reason it needs to be able to reject proposed vulnerabilities, and there's a clear conflict of interest when an entity whose motivation is having more CVEs focuses only on the problem of false negatives.
If you look at the company behind this, its services are all offense-based: all of its training is about exploitation rather than building secure systems, and its other services are penetration testing and data recovery. And it is clearly collecting CVEs, and now !CVEs, for advertising purposes. Organizations like this are exactly the ones causing the bogus CVE problem (which LWN covered well). The point of the CVE system, one would hope, is for people to actually keep themselves safe from vulnerabilities, and a company that sits solely on the side of finding more vulnerabilities, not of keeping real-world systems secure, has a conflict of interest of its own.
The specific "vulnerability" they built this system for, NotCVE-2023-0001, is a secure-boot bypass via voltage glitching. The vendor says that voltage glitching is outside its threat model and not one of the things it advertises to customers as being secure against. I don't know enough about this domain to evaluate whether that particular argument makes sense, but things outside the threat model are, again, precisely the kind of thing causing bogus CVEs. The LWN article gives three examples: an integer overflow "vulnerability" in the curl command line / API where a too-large value would just be interpreted as a smaller integer, a denial-of-service "vulnerability" in Postgres triggerable only by an administrative user, and the fact that an unlocked password manager database contains credentials that you can read. In all of these cases, the upstream project has drawn the lines of its threat model in perfectly sensible ways, and vulnerability reports that require the attacker to already be on the trusted side of that line help nobody except the ill-gotten reputations of the CVE discoverers.
I do actually think there should be a way, if this research firm believes that voltage glitching should be in scope for these processors, to dispute the threat model. There are certainly vendors that draw the lines badly and to their own convenience. But then the thing to file is an objection to the threat model as a whole, not some specific attack premised on a modified threat model, because you can generally find hundreds of similar attacks that rely on the same assumption. (The curl CLI can be subverted by LD_PRELOAD, a local sysadmin on the Postgres server can use ptrace, etc. The KeePassXC blog post explicitly says, "having lost control of your computer in this manner would mean the attacker could execute any number of security compromises against your KeePassXC database.")
In some cases the vendor will look obviously wrong and will hopefully be shamed into fixing its approach to security, not just the one vulnerability. In some cases (see recent LWN articles on whether Linux kernel filesystem implementations can trust the disk, or whether libbfd is safe to run on hostile input), there is legitimately non-obvious debate about the threat model itself. And in some cases the researcher will look obviously wrong. In all of these cases, documenting what the vendor believes about the threat model is far more valuable for actual end users, largely because (this is the bogus CVE problem again) many actual users probably do align with the vendor's envisioned threat model. Quoting a good article questioning the ReDoS vulnerability fad: "I’ll just be blunt about it: 99.9% of developers do not care about ReDoS 'vulnerabilities,' and they’re right not to care." Even those users who aren't aligned with the vendor's vision are usually better off realigning themselves (e.g., running the affected component in a sandbox, or finding an alternative tool for what they're doing) to protect themselves from the hundreds of attacks that actually do exist under the threat model the vendor isn't using. Filing those hundreds of similar attacks one at a time, and patching for them, isn't useful to the users who aren't affected, nor does it actually keep the affected users safe, and especially for small OSS projects it diverts maintainer attention in a way that probably leaves all users worse off. The only people who actually benefit, again, are the folks who want to catch CVEs like they're Pokémon.
The only other NotCVE at the time of writing, NotCVE-2023-0002, is a crash on malformed input in a function that, as far as I can tell, is intended to be called by local code. It shows all the signs of CVE abuse: the reporter claims it is "obviously reachable (if the fuzzer did then everyone can + it is part of the exposed functions of the module)" but offers no analysis of whether it is reachable from untrusted inputs. The affected tools are all command-line utilities, and nothing in them implies that they are meant to be made accessible to remote users, let alone to untrusted remote users, yet the NotCVE website claims this has an attack vector of "network." This is, again, almost certainly a case of the researcher assuming a threat model that is probably unwarranted, and it's exactly the sort of thing the CVE project should be rejecting!
Frankly, I don't think LWN should be giving these guys the publicity. This is just taking one of the major flaws of the CVE program and calling it a feature.