Security
Core Infrastructure Initiative best-practices badge
The Linux Foundation Core Infrastructure Initiative (CII) has recently announced the general availability of its best-practices badge project, which is meant to help projects follow practices that will improve their security. I'm the technical lead of the project, which is also known as the "badging project". In this article I'll focus on what the badge criteria currently are, including how they were developed and some specific examples, as well as talk about the project as a whole. But first, a little history.
In 2014, the Heartbleed vulnerability was found in the OpenSSL cryptographic library. This vulnerability raised awareness that there are some vitally important free/libre and open source software (FLOSS) projects that have serious problems. In response, the Linux Foundation created the CII to fund and support critical elements of the global information infrastructure. The CII has identified and funded specific important projects, but it cannot fund all projects. So the CII is also funding some approaches to generally improve the security of FLOSS.
The badge
The latest CII project, which focuses on improving security in general, is the "best-practices badge" project. CII believes that FLOSS projects that follow best practices are more likely to be healthy and to produce better software in many respects, including having better security. Most project members want their projects to be healthy, and users prefer to depend on healthy projects. Without a list of best practices, it's easy to overlook something important.
FLOSS projects that adequately follow the best practices can get a badge to demonstrate that they do. It costs no money to get a badge and filling in the form takes less than an hour. Note that the CII best-practices badge is for a project, not for an individual, since project members can change over time.
There really is a problem today; some projects are not applying the hard-learned lessons of other projects. Many projects are not released using a FLOSS license, yet their developers often appear to (incorrectly) think they are FLOSS projects. Ben Balter's 2015 presentation "Open source licensing by the numbers" suggested that on GitHub, 23% of the projects with 1000 or more stars had no license at all. Omitting a FLOSS license tends to inhibit project use, co-development, and review (including security reviews).
Some projects (like american fuzzy lop) do not have a public version-controlled repository, making it difficult for others to track changes or collaborate. Some projects only provide unauthenticated downloads of their code using HTTP, making it possible for attackers to subvert software downloads en route. Some projects don't provide any information on how to submit vulnerability reports (are you supposed to use the usual bug tracker?); this can create unnecessary delays in vulnerability reporting and handling. Many projects don't use any static source code analysis tools, even though these tools can find defects (including vulnerabilities).
OpenSSL before Heartbleed is an example. The OpenSSL project at the time of Heartbleed had a legion of problems. For example, its code was hard to read (there was no standard coding style and its code was notoriously complex), making it difficult to review. Unsurprisingly, vulnerabilities (like Heartbleed) are more likely to slip in when code is difficult to review. The best-practices criteria were not created with OpenSSL specifically in mind, but one of its project members went back and found that the OpenSSL project before Heartbleed failed to meet about one-third of the current best-practices criteria.
Of course, there are a massive number of practices that could together be called "best practices". The term "best practices" is really just a commonly-used term for some set of recommended practices.
Let's first admit the limitations on any set of practices. No set of practices can guarantee that software will never have defects or vulnerabilities. Even formal methods can fail if the specifications or assumptions are wrong. Nor is there any set of practices that can guarantee that a project will sustain a healthy and well-functioning development community.
However, following best practices can help improve the results of projects. For example, some practices enable or encourage multi-person review, or can make review more effective at finding defects (including defects that lead to vulnerabilities).
Perhaps the most important step towards developing the criteria (and the web application that implements them) was the decision that the project would itself be developed as a FLOSS project. The web application is under the MIT license; all text (including the criteria) are dual-licensed under the MIT or CC-BY version 3.0 (or later) licenses. The CII publicly set up the project on GitHub, created some early draft criteria, and invited feedback.
Producing the criteria
The initial criteria were primarily based on reviewing a lot of existing documents about what FLOSS projects should do, and those were in turn based on observing existing successful projects. A good example, and probably the single most influential source, is Karl Fogel's book Producing Open Source Software. Many people provided feedback or contributed to the badging project, including Dan Kohn, Emily Ratliff, Karl Fogel, Greg Kroah-Hartman (the Linux kernel), Rich Salz (OpenSSL), Daniel Stenberg (curl), Sam Khakimov, Doug Birdwell, Alton Blom, and Dale Visser.
A web application was developed for FLOSS project members to fill in information; the web application project itself fulfilled the criteria, so it got its own badge. This effort helped steer the project away from impractical criteria. The project also recruited some early "alpha tester" projects to try out early drafts and provide feedback, in particular to ensure that the criteria would apply to both big projects (like the Linux kernel) and small projects (like curl). For example, there is no criterion requiring 100% statement coverage for tests; that can be a useful goal, but on many projects it's impractical (especially if it requires unusual hardware) or not worth pursuing.
Getting a badge intentionally doesn't require or forbid any particular services or programming languages. A lot of people use GitHub, and in those cases the web application automatically fills in some of the form based on data from GitHub, but projects do not have to use GitHub.
Scale is also a key issue. An evaluation process that takes a year or more, or costs hundreds of thousands of dollars, cannot be applied to all of the vast number of FLOSS projects. In-depth evaluation is not bad, of course, but the project is trying to be useful for a large set of FLOSS projects. Instead of requiring expensive third-party assessment, the focus is on self-assessment combined with automation.
Self-assessment can have its problems, but it scales much better and there are several approaches to help counter the problems of self-assessment like false claims. First, all the results are made public, so anyone can check the claims. Second, the web application also includes automation that checks entries before they are saved — and in some cases it overrides user information if it's false or inadequately justified. Finally, the CII does review project entries (particularly if they claim to be passing) and can delete or fix entries (e.g., if they are false or irrelevant). This emphasis on self-assessment does mean that the badging project had to try to write criteria that could be clearly understood directly by the projects.
Currently, the focus is on identifying best practices that well-run projects typically already follow. The project leads decided that it was more important to come up with a smaller set of widely-applied best practices. That way, all projects can be helped to reach some minimum bar that is widely accepted. The project was especially interested in criteria that help enable multi-person review or tend to improve security. The criteria also had to be relevant, attainable by typical FLOSS projects, and clear. Preference was also given to criteria that at least one project did not already follow; after all, if every project follows a practice without exception, adding it as a criterion would be a waste of time.
In the longer term, there are plans to add higher badge levels beyond the current "passing" level, tentatively named the "gold" and "platinum" levels. Projects that are widely depended on and are often attacked, such as the Linux kernel or any cryptographic library, should, of course, be doing much more than a minimum set of widely-applied best practices. However, the project team decided to create the criteria in stages.
There is an expectation that once a number of projects get a passing badge (and provide feedback), the badging project will be in a better position to determine the criteria for higher levels. You can see a list of some of the proposed higher-level criteria in the "other" criteria documentation. If you think of others, or think some are especially important, please let the badging project know.
One intentional omission is anything actually requiring an active development community, multi-person review, or multiple developers (e.g., a high "bus factor"). Obviously, having more reviewers or developers within an active community is much better for a project and users should normally prefer such projects. However, in many cases this is not directly under a project's control. For example, some projects are so specialized that they're not likely to attract many reviewers or developers and new projects often can't meet such criteria. For the initial badge level, the focus is, instead, on things that project members can directly control. Meeting the badge criteria should help projects grow and sustain a healthy, well-functioning, and active development community. Higher badge levels will almost certainly add criteria requiring a larger active community and a minimum bus factor (at least more than one).
The criteria
Once the initial criteria were identified, they were grouped into the following categories: basics, change control, reporting, quality, security, and analysis. Below, a few of the 66 criteria (including their identifiers) are described, along with why they're important.
The "basics" group includes basic requirements for the project. This includes requiring either a project or repository URL (the web application uses this to automatically fill in some information). Examples include:
- Criterion floss_license states that "the software MUST be released as FLOSS", and FLOSS is defined as software "released in a way that meets the Open Source Definition or Free Software Definition". These criteria were designed for FLOSS projects and are meant to encourage collaborative development and review; that doesn't make sense when there's no legal basis for the collaboration.
- Criterion sites_https says that "the project sites (web site, repository, and download URLs) MUST support HTTPS using TLS". This is obviously a more security-oriented requirement. It has sparked some controversy, because GitHub pages do not fully support HTTPS. Although users can retrieve *.github.io pages using HTTPS, these pages are still vulnerable to interception and malicious modification because, at this time, they are retrieved via CloudFlare, which retrieves these files without using HTTPS. In addition, many projects have a custom domain (typically the project's name) with a web site served via GitHub pages, and these cannot currently be protected by HTTPS at all. One compromise being discussed is to only require that the repository and download URLs use HTTPS, since that would at least protect the software while it is downloaded.
The "change control" group focuses on managing change, including having ways to report problems, issue/bug trackers, and version-control software. Examples include:
- The repo_public criterion says that "the project MUST have a version-controlled source repository that is publicly readable and has a URL". Version control greatly reduces the risks of changes being dropped or incorrectly applied and makes it much easier to apply changes.
- Criterion vulnerability_report_process says: "The project MUST publish the process for reporting vulnerabilities on the project site". This makes it much easier for security researchers to provide their reports, and thus makes it more likely that they'll happen. Many bug reporting systems are public, and it's not obvious to outsiders whether projects want security bug reports to be public or not. A surprising number of projects didn't meet this criterion, even though meeting it can be as simple as putting one sentence on the project web site.
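As a purely hypothetical illustration of how little is needed to meet this criterion, a project could add a statement like the following to its web site or README (the project name and address here are made up):

```
Security: to report a vulnerability in example-project, please email
security@example-project.org rather than filing a public bug, so that
the report stays private until a fix is available.
```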
The "quality" group focuses on general software quality, including a project's build process and automated test suite. Examples include:
- Criterion test says: "The project MUST have at least one automated test suite that is publicly released as FLOSS (this test suite may be maintained as a separate FLOSS project)". An automated test suite makes it much easier to detect many mistakes before users have to deal with them. Test suites can always be improved; the key is to have one that can be improved.
- The warnings criterion says: "The project MUST enable one or more compiler warning flags, a 'safe' language mode, or use a separate 'linter' tool to look for code quality errors or common simple mistakes, if there is at least one FLOSS tool that can implement this criterion in the selected language". These flags and tools can detect some defects, some of which may be security vulnerabilities. In addition, these mechanisms can warn about awkward constructs that make code hard to read.
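To illustrate the kind of "common simple mistake" such tools catch, here is a minimal Python sketch (Python and its linters are chosen purely as an example; the criterion itself is language-neutral). Linters such as pylint flag the mutable-default-argument pattern, which silently shares state across calls:

```python
# Classic mistake flagged by linters (e.g., pylint's W0102):
# the default list is created once and shared across every call.
def record_buggy(event, log=[]):
    log.append(event)
    return log

# The idiomatic fix: use None as a sentinel and create a fresh list.
def record_fixed(event, log=None):
    if log is None:
        log = []
    log.append(event)
    return log

print(record_buggy("a"))   # ['a']
print(record_buggy("b"))   # ['a', 'b'] -- state unexpectedly leaks between calls
print(record_fixed("a"))   # ['a']
print(record_fixed("b"))   # ['b']
```

A human reviewer can easily miss this; a linter reports it mechanically on every commit, which is exactly the point of the criterion.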
The "security" group lists criteria specific to improving software security. Examples include:
- The know_secure_design criterion states: "The project MUST have at least one primary developer who knows how to design secure software. This requires understanding the following design principles, including the 8 principles from Saltzer and Schroeder...". There are a number of well-known design principles for designing secure software, such as using fail-safe defaults (access decisions should deny by default and installation should be secure by default). Knowing these principles can reduce the likelihood or impact of vulnerabilities.
- Criterion know_common_errors says: "At least one of the primary developers MUST know of common kinds of errors that lead to vulnerabilities in this kind of software, as well as at least one method to counter or mitigate each of them". Most vulnerabilities stem from a small set of well-known kinds of errors, such as SQL injections and buffer overflows. Knowing what they are (and how to counter or mitigate them) can result in an order-of-magnitude reduction in the number of vulnerabilities. It would be best if all of the developers knew this but, if one does, that person can teach the others. The biggest problem is when no developer knows this information.
- Criterion crypto_published states: "The project's cryptographic software MUST use only cryptographic protocols and algorithms that are publicly published and reviewed by experts". Home-grown cryptography is vulnerable cryptography. You need an advanced degree in mathematics or a related field, plus years of specialization in cryptography, before you know enough to create new cryptographic protocols and algorithms that can stand up to today's aggressive adversaries.
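A small, hypothetical Python sketch can make two of these ideas concrete: fail-safe defaults (deny unless explicitly allowed) and countering the classic SQL-injection error with parameterized queries. This is an illustration of the principles, not code from any badged project:

```python
import sqlite3

# Fail-safe default: the only permissions are the ones explicitly granted.
GRANTS = {("alice", "read")}

def allowed(user, action):
    # Deny by default; anything not in the grant set is refused.
    return (user, action) in GRANTS

# Countering a common error class: build SQL with placeholders, never
# string concatenation, so attacker-controlled input stays data.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT)")
db.execute("INSERT INTO users VALUES (?)", ("alice",))

def find_user(name):
    # The ? placeholder prevents `name` from altering the query structure.
    return db.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()

print(allowed("alice", "read"))      # True
print(allowed("mallory", "write"))   # False
print(find_user("alice"))            # [('alice',)]
print(find_user("' OR '1'='1"))      # [] -- the injection attempt matches nothing
```

Had find_user() interpolated the name into the SQL string directly, the final call would have returned every row; the parameterized version makes that whole error class impossible.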
The "analysis" group lists criteria specific to analyzing software. Examples include:
- Criterion static_analysis requires: "At least one static code analysis tool MUST be applied to any proposed major production release of the software before its release, if there is at least one FLOSS tool that implements this criterion in the selected language. A static code analysis tool examines the software code (as source code, intermediate code, or executable) without executing it with specific inputs. For purposes of this criterion, compiler warnings and 'safe' language modes do not count as static code analysis tools (these typically avoid deep analysis because speed is vital)." Static code-analysis tools (designed for that purpose) can dig deep into code and find a variety of problems. It's true that these tools can't find everything, but the idea is to try to find and fix the problems that can be found this way.
- Criterion dynamic_analysis says: "It is SUGGESTED that at least one dynamic analysis tool be applied to any proposed major production release of the software before its release". Dynamic-analysis tools can find vulnerabilities that static-analysis tools often miss (and vice versa), so it's best to use both. It would be nice to use them on every commit, but on some projects that's impractical; typically, though, they can be applied to every release.
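The defining property of static analysis — examining code without executing it — can be shown with a toy analyzer built on Python's standard ast module. This few-line sketch flags calls to eval() by walking the syntax tree; real tools go far deeper, but the mechanism is the same:

```python
import ast

def find_eval_calls(source):
    """Report line numbers of eval() calls by inspecting the parsed
    syntax tree; the analyzed code is never executed."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(node.lineno)
    return findings

sample = "x = 1\ny = eval(input())\n"
print(find_eval_calls(sample))   # [2]
```

A dynamic tool, by contrast, would have to run this code with concrete inputs to observe the dangerous call, which is why the two approaches find different (and complementary) sets of problems.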
The criteria will change slowly, probably annually, as the project gets more feedback and the set of best practices in use changes. The current plan is to add proposed criteria as "future" criteria, which appear in the web application but are initially ignored. That will give projects time to meet the new criteria (and show that they do), and give the community time to justify modifying a criterion or removing it from the set of proposed criteria.
For example, the hardening criterion is currently a planned addition; it would require that "hardening mechanisms be used so software defects are less likely to result in security vulnerabilities". The current plan is that this criterion would be added at the "passing" level for all projects in 2017. Projects that don't meet the updated criteria by the update deadline would lose their "passing" status until they fixed the problem. This process is similar to a "recertification" process but is hopefully less burdensome.
FLOSS projects that have already achieved the badge include the Linux kernel, curl, Node.js, GitLab, OpenBlox, OpenSSL, and Zephyr. I encourage any FLOSS project member to go to the site and get their badge. If you have comments on the criteria (including for higher levels to be developed), please submit comments using the GitHub issue tracker or project mailing list.
Brief items
Security quotes of the week
Wolf: Stop it with those short PGP key IDs!
At his blog, Gunnar Wolf urges developers to stop using "short" (eight hex-digit) PGP key IDs as soon as possible. The impetus for the advice originates with Debian's Enrico Zini, who recently found two keys sharing the same short ID in the wild. The possibility of short-ID collisions has been known for a while, but it is still disconcerting to see in the wild. "Those three keys are not (yet?) uploaded to the keyservers, though... But we can expect them to appear at any point in the future. We don't know who is behind this, or what his purpose is. We just know this looks very evil."
Wolf goes on to note that short IDs are not merely human-readable conveniences, but are actually used to identify PGP keys in some software programs. To mitigate the risk, he recommends configuring GnuPG to never show short IDs, ensuring that other programs do not consume short IDs, and to "only sign somebody else's key if you see and verify its full fingerprint. [...] And there are surely many other important recommendations. But this is a good set of points to start with."
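The first of those recommendations can be implemented with a one-line GnuPG configuration change (a sketch; `keyid-format` is a standard GnuPG option, and `0xlong` selects 64-bit IDs with a `0x` prefix):

```
# In ~/.gnupg/gpg.conf: display 64-bit ("long") key IDs instead of the
# trivially collidable 32-bit short IDs.
keyid-format 0xlong
```

Even long IDs are only a convenience; as Wolf says, signing decisions should rest on the full fingerprint.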
New vulnerabilities
chromium-browser: multiple vulnerabilities
Package(s): chromium-browser
CVE #(s): CVE-2016-1696 CVE-2016-1697 CVE-2016-1698 CVE-2016-1699 CVE-2016-1700 CVE-2016-1701 CVE-2016-1702 CVE-2016-1703
Created: June 3, 2016
Updated: June 8, 2016
Description: From the Red Hat advisory:
- CVE-2016-1696: cross-origin bypass in extension bindings
- CVE-2016-1697: cross-origin bypass in blink
- CVE-2016-1698: information leak in extension bindings
- CVE-2016-1699: parameter sanitization failure in devtools
- CVE-2016-1700: use-after-free in extensions
- CVE-2016-1701: use-after-free in autofill
- CVE-2016-1702: out-of-bounds read in skia
- CVE-2016-1703: various fixes from internal audits
dhcpcd5: code execution
Package(s): dhcpcd5
CVE #(s): CVE-2014-7912
Created: June 7, 2016
Updated: June 8, 2016
Description: From the CVE entry:
The get_option function in dhcp.c in dhcpcd before 6.2.0, as used in dhcpcd 5.x in Android before 5.1 and other products, does not validate the relationship between length fields and the amount of data, which allows remote DHCP servers to execute arbitrary code or cause a denial of service (memory corruption) via a large length value of an option in a DHCPACK message.
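The underlying bug class is trusting a declared option length without checking it against the data actually available. A hedged Python sketch of the safe pattern for TLV-style option parsing (a hypothetical parser illustrating the fix, not dhcpcd's actual C code):

```python
def parse_options(buf):
    """Parse (type, length, value) options, rejecting any option whose
    declared length exceeds the bytes remaining in the buffer."""
    options = []
    i = 0
    while i + 2 <= len(buf):
        opt_type = buf[i]
        opt_len = buf[i + 1]
        if i + 2 + opt_len > len(buf):
            # The vulnerable pattern reads opt_len bytes unconditionally,
            # running past the end of the buffer; the fix is to refuse.
            raise ValueError("option length exceeds available data")
        options.append((opt_type, bytes(buf[i + 2:i + 2 + opt_len])))
        i += 2 + opt_len
    return options

print(parse_options(b"\x35\x01\x05"))   # [(53, b'\x05')]
```

In C the equivalent check must be written carefully to avoid integer overflow in the `i + 2 + opt_len` arithmetic, which is another common source of exactly this kind of CVE.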
expat: two vulnerabilities
Package(s): expat
CVE #(s): CVE-2012-6702 CVE-2016-5300
Created: June 8, 2016
Updated: June 21, 2016
Description: From the Debian advisory:
CVE-2012-6702: This issue was introduced when CVE-2012-0876 was addressed. Stefan Sørensen discovered that the use of the XML_Parse() function seeds the random number generator in a way that produces repeated outputs for rand() calls.
CVE-2016-5300: This issue is the product of an incomplete fix for CVE-2012-0876. The parser poorly seeds the random number generator, allowing an attacker to cause a denial of service (CPU consumption) via an XML file with crafted identifiers.
glibc: denial of service
Package(s): glibc
CVE #(s): CVE-2016-4429
Created: June 7, 2016
Updated: August 1, 2016
Description: From the Red Hat bugzilla:
A stack frame overflow flaw was found in glibc's clntudp_call(). A malicious server could use this flaw to flood a connecting client application with ICMP and UDP packets, triggering the stack overflow and resulting in a crash. clntudp_call() contains an alloca() call in a loop, which causes it to consume very large amounts of stack space. The same faulty code is also present in the libtirpc library.
libpdfbox-java: XML External Entity (XXE) attacks
Package(s): libpdfbox-java
CVE #(s): CVE-2016-2175
Created: June 8, 2016
Updated: July 19, 2016
Description: From the CVE entry:
Apache PDFBox before 1.8.12 and 2.x before 2.0.1 does not properly initialize the XML parsers, which allows context-dependent attackers to conduct XML External Entity (XXE) attacks via a crafted PDF.
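The general fix for this bug class is to initialize XML parsers with external-entity resolution disabled. A sketch of the principle using Python's standard library (illustrating proper parser initialization in general, not PDFBox's actual Java fix):

```python
import xml.sax
from xml.sax.handler import feature_external_ges, feature_external_pes

def make_hardened_parser():
    """Create a SAX parser that will not fetch external general or
    parameter entities, closing off the classic XXE vector."""
    parser = xml.sax.make_parser()
    parser.setFeature(feature_external_ges, False)
    parser.setFeature(feature_external_pes, False)
    return parser

p = make_hardened_parser()
print(bool(p.getFeature(feature_external_ges)))   # False
```

The recurring lesson from XXE CVEs is that parser hardening must be explicit at every construction site; a single parser created with library defaults can reopen the hole.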
libxml2: multiple vulnerabilities
Package(s): libxml2
CVE #(s): CVE-2015-8806 CVE-2016-2073
Created: June 3, 2016
Updated: June 8, 2016
Description: From the CVE entries:
CVE-2015-8806: dict.c in libxml2 allows remote attackers to cause a denial of service (heap-based buffer over-read and application crash) via an unexpected character immediately after the "<!DOCTYPE html" substring in a crafted HTML document.
CVE-2016-2073: The htmlParseNameComplex function in HTMLparser.c in libxml2 allows attackers to cause a denial of service (out-of-bounds read) via a crafted XML document.
mozilla: multiple vulnerabilities
Package(s): firefox thunderbird seamonkey
CVE #(s): CVE-2016-2815 CVE-2016-2818 CVE-2016-2819 CVE-2016-2821 CVE-2016-2822 CVE-2016-2825 CVE-2016-2828 CVE-2016-2829 CVE-2016-2831 CVE-2016-2832 CVE-2016-2833
Created: June 8, 2016
Updated: August 12, 2016
Description: From the Arch Linux advisory:
- CVE-2016-2815 (arbitrary code execution): Mozilla developers and community members reported several memory safety bugs in the browser engine used in Firefox and other Mozilla-based products. Some of these bugs showed evidence of memory corruption under certain circumstances, and we presume that with enough effort at least some of these could be exploited to run arbitrary code.
- CVE-2016-2818 (arbitrary code execution): Mozilla developers and community members reported several memory safety bugs in the browser engine used in Firefox and other Mozilla-based products. Some of these bugs showed evidence of memory corruption under certain circumstances, and we presume that with enough effort at least some of these could be exploited to run arbitrary code.
- CVE-2016-2819 (arbitrary code execution): Security researcher firehack reported a buffer overflow when parsing HTML5 fragments in a foreign context such as under an <svg> node. This results in a potentially exploitable crash when inserting an HTML fragment into an existing document.
- CVE-2016-2821 (arbitrary code execution): Security researcher firehack used the Address Sanitizer tool to discover a use-after-free in contenteditable mode. This occurs when deleting document object model (DOM) table elements created within the editor and results in a potentially exploitable crash.
- CVE-2016-2822 (addressbar spoofing): Security researcher Jordi Chancel reported a method to spoof the contents of the addressbar. This uses a persistent menu within a <select> element, which acts as a container for HTML content and can be placed in an arbitrary location. When placed over the addressbar, this can mask the true site URL, allowing for spoofing by a malicious site.
- CVE-2016-2825 (same-origin policy bypass): Security researcher Armin Razmdjou reported that the location.host property can be set to an arbitrary string after creating an invalid data: URI. This allows for a bypass of some same-origin policy protections. This issue is mitigated by the data: URI in use, and any same-origin checks for http: or https: are still enforced correctly. As a result, cookie stealing and other common same-origin bypass attacks are not possible.
- CVE-2016-2828 (arbitrary code execution): Mozilla community member jomo reported a use-after-free crash when processing WebGL content. This issue was caused by the use of a texture after its recycle pool has been destroyed during WebGL operations, which frees the memory associated with the texture. This results in a potentially exploitable crash when the texture is later called.
- CVE-2016-2829 (visual user confusion): Security researcher Tim McCormack reported that when a page requests a series of permissions in a short timespan, the resulting permission notifications can show the icon for the wrong permission request. This can lead to user confusion and inadvertent consent when a user is prompted by web content to give permissions, such as for geolocation or microphone access.
- CVE-2016-2831 (clickjacking): Security researcher sushi Anton Larsson reported that when paired fullscreen and pointerlock requests are done in combination with closing windows, a pointerlock can be created within a fullscreen window without user permission. This pointerlock cannot then be cancelled without terminating the browser, resulting in a persistent denial of service attack. This can also be used for spoofing and clickjacking attacks against the browser UI.
- CVE-2016-2832 (information leakage): Mozilla developer John Schoenick reported that CSS pseudo-classes can be used by web content to leak information on plugins that are installed but disabled. This can be used for information disclosure through a fingerprinting attack that lists all of the plugins installed by a user on a system, even when they are disabled.
- CVE-2016-2833 (cross-site scripting): Mozilla engineer Matt Wobensmith reported that Content Security Policy (CSP) does not block the loading of cross-domain Java applets when specified by policy. This is because the Java applet is loaded by the Java plugin, which then mediates all network requests without checking against CSP. This could allow a malicious site to manipulate content through a Java applet to bypass CSP protections, allowing for possible cross-site scripting (XSS) attacks.
ntp: multiple vulnerabilities
Package(s): ntp
CVE #(s): CVE-2016-4953 CVE-2016-4954 CVE-2016-4955 CVE-2016-4956 CVE-2016-4957
Created: June 6, 2016
Updated: June 21, 2016
Description: From the Arch Linux advisory:
- CVE-2016-4953 (distributed denial of service amplification): An attacker who knows the origin timestamp and can send a spoofed packet containing a CRYPTO-NAK to an ephemeral peer target before any other response is sent can demobilize that association. Credit to Miroslav Lichvar of Red Hat.
- CVE-2016-4954 (distributed denial of service amplification): An attacker who is able to spoof packets with correct origin timestamps from enough servers before the expected response packets arrive at the target machine can affect some peer variables and, for example, cause a false leap indication to be set. Credit to Jakub Prokes of Red Hat.
- CVE-2016-4955 (distributed denial of service amplification): An attacker who is able to spoof a packet with a correct origin timestamp before the expected response packet arrives at the target machine can send a CRYPTO_NAK or a bad MAC and cause the association's peer variables to be cleared. If this can be done often enough, it will prevent that association from working. Credit to Miroslav Lichvar of Red Hat.
- CVE-2016-4956 (distributed denial of service amplification): The fix for NtpBug2978 does not cover broadcast associations, so broadcast clients can be triggered to flip into interleave mode. Credit to Miroslav Lichvar of Red Hat.
- CVE-2016-4957 (distributed denial of service amplification): The fix for Sec 3007 in ntp-4.2.8p7 contained a bug that could cause ntpd to crash. Credit to Nicolas Edet of Cisco.
openslp: denial of service
Package(s): openslp
CVE #(s): CVE-2016-4912
Created: June 8, 2016
Updated: June 13, 2016
Description: From the Red Hat bugzilla:
A null pointer dereference vulnerability was found in the _xrealloc() function in xlsp_xmalloc.c in OpenSLP. A remote attacker could potentially crash the server when a large number of packets is sent.
pgpdump: buffer overrun
Package(s): pgpdump
CVE #(s): (none)
Created: June 3, 2016
Updated: June 8, 2016
Description: From the Mageia advisory:
The pgpdump package has been updated to version 0.31, fixing a buffer overrun.
php: integer overflow
Package(s): php    CVE #(s): CVE-2016-5095
Created: June 6, 2016    Updated: June 8, 2016
Description: From the Red Hat bugzilla:
An integer overflow in php_filter_full_special_chars was found, similar to CVE-2016-5094.
php: two vulnerabilities
Package(s): php5    CVE #(s): CVE-2015-4116 CVE-2015-8873
Created: June 8, 2016    Updated: June 8, 2016
Description: From the CVE entries:
Use-after-free vulnerability in the spl_ptr_heap_insert function in ext/spl/spl_heap.c in PHP before 5.5.27 and 5.6.x before 5.6.11 allows remote attackers to execute arbitrary code by triggering a failed SplMinHeap::compare operation. (CVE-2015-4116)
Stack consumption vulnerability in Zend/zend_exceptions.c in PHP before 5.4.44, 5.5.x before 5.5.28, and 5.6.x before 5.6.12 allows remote attackers to cause a denial of service (segmentation fault) via recursive method calls. (CVE-2015-8873)
puppet-agent: multiple vulnerabilities
Package(s): puppet-agent    CVE #(s): CVE-2016-2785 CVE-2016-2786
Created: June 6, 2016    Updated: June 8, 2016
Description: From the Gentoo advisory:
Multiple vulnerabilities have been discovered in Puppet Server and Agent. Remote attackers, impersonating a trusted broker, could potentially execute arbitrary code.
qemu: denial of service
Package(s): qemu    CVE #(s): CVE-2016-5107
Created: June 8, 2016    Updated: June 8, 2016
Description: From the Arch Linux advisory:
Quick Emulator (Qemu) built with the MegaRAID SAS 8708EM2 Host Bus Adapter emulation support is vulnerable to an out-of-bounds read access issue. It could occur while looking up MegaRAID Firmware Interface (MFI) command frames in the 'megasas_lookup_frame' routine. A privileged user inside the guest could use this flaw to read invalid memory, leading to a crash of the Qemu process on the host.
roundcubemail: cross-site scripting
Package(s): roundcubemail    CVE #(s): CVE-2016-5103
Created: June 6, 2016    Updated: January 2, 2017
Description: From the Red Hat bugzilla:
The 1.2.0 release of roundcubemail fixed an XSS vulnerability in the href attribute of the area tag.
spice: two vulnerabilities
Package(s): spice    CVE #(s): CVE-2016-0749 CVE-2016-2150
Created: June 7, 2016    Updated: July 20, 2016
Description: From the Red Hat advisory:
* A memory allocation flaw, leading to a heap-based buffer overflow, was found in spice's smartcard interaction, which runs under the QEMU-KVM context on the host. A user connecting to a guest VM using spice could potentially use this flaw to crash the QEMU-KVM process or execute arbitrary code with the privileges of the host's QEMU-KVM process. (CVE-2016-0749)
* A memory access flaw was found in the way spice handled certain guests using crafted primary surface parameters. A user in a guest could use this flaw to read from and write to arbitrary memory locations on the host. (CVE-2016-2150)
sudo: information leak
Package(s): sudo    CVE #(s):
Created: June 6, 2016    Updated: June 20, 2016
Description: From the Red Hat bugzilla:
It was found that a malicious user can leak some information about arbitrary files by providing an arbitrary value for INPUTRC, since the target application parses the INPUTRC file with the target user's privileges. In the current version of readline, this kind of attack is limited to timing attacks and leaks of line content matching a very particular format, but the next release will feature enhanced error reporting, making the disclosure more dangerous. It is also possible to cause a segmentation fault through stack exhaustion in the target application by having INPUTRC specify a file with an $include directive for itself. RHEL and Fedora by default include INPUTRC in /etc/sudoers, exposing this issue to users of the default sudo configuration. INPUTRC should not be included in "env_keep" at all, or else somehow restricted to non-restricted shells (i.e. /bin/sh, /bin/bash).
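The mitigation suggested above can be sketched as a sudoers fragment. This is an illustrative example only; the exact default env_keep line varies by distribution, so check /etc/sudoers (via visudo) on the affected system:

```
# /etc/sudoers fragment (illustrative; actual defaults vary by distribution).
# A vulnerable default preserves INPUTRC across sudo invocations:
#   Defaults    env_keep += "INPUTRC"
# The advisory's recommendation is simply not to preserve it:
Defaults    env_keep -= "INPUTRC"
```

The `-=` operator removes an entry from a list-valued sudoers option, so this works even when INPUTRC was added by an earlier `env_keep +=` line in an included file.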
vlc: code execution
Package(s): vlc    CVE #(s): CVE-2016-5108
Created: June 8, 2016    Updated: January 17, 2017
Description: From the Debian advisory:
Patrick Coleman discovered that missing input sanitising in the ADPCM decoder of the VLC media player may result in the execution of arbitrary code if a malformed media file is opened.
xen: three vulnerabilities
Package(s): xen    CVE #(s): CVE-2014-3672 CVE-2016-5106 CVE-2016-5105
Created: June 6, 2016    Updated: June 8, 2016
Description: From the Red Hat bugzilla:
When the libxl toolstack launches qemu for HVM guests, it pipes the output of stderr to a file in /var/log/xen. This output is not rate-limited in any way. The guest can easily cause qemu to print messages to stderr, causing this file to become arbitrarily large. The disk containing the logfile can be exhausted, possibly causing a denial-of-service (DoS). (CVE-2014-3672)
Quick Emulator (Qemu) built with the MegaRAID SAS 8708EM2 Host Bus Adapter emulation support is vulnerable to an out-of-bounds write access issue. It could occur while processing a MegaRAID Firmware Interface (MFI) command to set controller properties in 'megasas_dcmd_set_properties'. A privileged user inside the guest could use this flaw to crash the Qemu process on the host, resulting in a DoS. (CVE-2016-5106)
Quick Emulator (Qemu) built with the MegaRAID SAS 8708EM2 Host Bus Adapter emulation support is vulnerable to an information leakage issue. It could occur while processing a MegaRAID Firmware Interface (MFI) command to read device configuration in 'megasas_dcmd_cfg_read'. A privileged user inside the guest could use this flaw to leak host memory bytes. (CVE-2016-5105)
Page editor: Jake Edge