The bug parade meets the zombies
His 8am keynote was much better attended than speaker Gary McGraw expected. In fact, given the evening festivities that normally occur at conferences, he wouldn't have shown up for a talk that early—except that he was giving it, he said with a laugh. But the keynote was well worth it for those who did arise early; it was a highly animated, somewhat scattered—though often amusing—look at some of the history of software security, problems within the industry, and some initiatives that may help produce better code in the future. He gave his talk on September 19 at AppSec USA in Denver.
McGraw is an author and security researcher. He started out as "a software guy", not really "a security guy", but got into security "because software sucks". He studied philosophy in college, then got a PhD in cognitive science under Douglas Hofstadter in Indiana. He is now the CTO of the software-security consulting firm Cigital.
Some background
When Java was released, it was claimed to be a "secure" language. He and Edward Felten started thinking about what that meant, and whether it was true. That led to the 1996 book they co-authored: Java Security. Java's security was broken back in that time frame, McGraw said, and it has been broken recently as well. The new problems look much like the problems that he and Felten identified.
"We need to do a better job", he said. To that end, he wrote Building Secure Software in 2000. It came about because he started thinking about the problems with Java and how two smart minds, Guy Steele and Bill Joy, got tripped up. He wondered where lesser minds could go to learn about secure programming. The "answer was 'nowhere'", so "we wrote the book".
Going way back, computers were kept in a separate room and programmers had to "genuflect to get access" to them. In those days, "computers were expensive and people were cheap", he said; "my, how things have changed". It is the opposite now: computers are cheap and people are expensive.
Much of the software in use today was created by and for "geeks". But now all that software is "used by normals, not geeks". The normals don't care about security, the cloud, mobile vs. desktop, etc.; they just want to be able to "get their stuff done", which includes things like shopping, banking, and Facebook, McGraw said.
In his view, there are too many system and network administrators and too few "software people" involved in security. He suggested that anyone who wants to work in security "learn to code". There is a trend toward talking about "application security", rather than software security; he is not a fan of the former term. It was coined by someone walking up the layers of the OSI networking model until they found one that sounded good, he said. It is a network-centric, rather than a software-centric, view of security.
In the early 2000s, network administrators were convinced they had secure networks, except for those pesky users—especially "users with compilers". So, those administrators referred any security problems to the developers. The developers hated the network administrators who were tasked with security, though, because their requirements got in the way of developing code. What's needed is someone in the middle, McGraw said.
There need to be "people whose job it is to do software security". Right now, "nobody" gets blamed when security goes awry, because no one is tasked with doing it. "Who gets the credit when things go right?", he asked. The audience suggested "nobody", but that was not the answer he was looking for: "the CEO, of course", he said with a chuckle, "it's capitalism 101". More seriously, though, "there should be somebody that gets fired when software security goes badly". That team should have a "large budget and lots of people".
The bug parade
One of the issues for the software-security industry is "the bug parade", McGraw said. There are two types of security problems: bugs in the implementation and flaws in the design. It is "way easier" to deal with bugs than to handle the design flaws, so the industry tends to pretend that "it is all bugs". It also pretends that static analysis tools will find all of the problems. McGraw wrote one of the first static analysis tools, and now there are "lots of good ones"; he recommended using those tools, but "they won't solve the problem".
The number one bug is the buffer overflow, which accounted for more than half of the computer emergency response team (CERT) alerts in the 1990s. He had a code snippet in his slides [PPT]:
char x[12];   /* valid indexes run from 0 through 11 */
x[12] = '\0'; /* off by one: writes one byte past the end of x */
He asked what was wrong with that code and noted that the main problem is that it is hard to explain to someone that the twelfth element is actually indexed using 11. "How do you teach kids to count?", he asked. The decision to start indexing from zero was made for efficiency reasons; "C is left over from the days where computers were expensive and people were cheap".
Even "the bible" (the K&R C book) shows an example of "how not to get input", McGraw said. It has the equivalent of the following code:
#include <stdio.h>   /* historically declared gets(); C11 removed it */

int main(void) {     /* the slide's void main() is not standard C */
    char buf[1024];
    gets(buf);       /* no bounds check at all on the input */
    return 0;
}
That code helpfully puts the buffer on the stack and doesn't mention that an attacker can provide more than 1024 bytes of input. It is a recipe for a buffer overflow.
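For contrast, here is a minimal sketch (not from the talk) of the standard safer idiom: fgets() takes the buffer size as an argument and stops reading before the buffer can overflow.

#include <stdio.h>

int main(void) {
    char buf[1024];
    /* fgets() reads at most sizeof(buf) - 1 bytes and NUL-terminates,
       so oversized input is truncated instead of smashing the stack. */
    if (fgets(buf, sizeof(buf), stdin) == NULL)
        return 1;
    return 0;
}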
He then went into a bit of a language rant: "C sucks, C is bad, but C++ is worse". C++ is a "pile of stinking poo". There are hundreds of things you can do wrong in C, but C++ goes way beyond that: "don't use it, if possible". Interestingly (or, perhaps, tellingly), he did not really suggest a particular language to use, though C, C++, Java, and others were scorned throughout.
He then asked the Java programmers in the audience if they knew about "re-entrant code". Evidently not liking the response, he went into a short description of race conditions. Those are a big problem today and will only get worse with more (and larger) multi-core systems. They are "way more important than all the stupid web bugs", he said.
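To make the problem concrete, here is a minimal C sketch (mine, not McGraw's) of the kind of race condition he was describing: two threads performing an unsynchronized read-modify-write on shared state.

#include <pthread.h>
#include <stdio.h>

static long counter;            /* shared, unsynchronized state */

static void *bump(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;              /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* Frequently prints less than 2000000: increments from the two
       threads interleave and overwrite each other. */
    printf("%ld\n", counter);
    return 0;
}

Build it with -pthread; guarding counter with a mutex (or making it atomic) eliminates the race.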
He also pointed to a long list of Java language bugs that were fixed between 1996 and 2000, "but they're back". The Java sandbox has proved not to be the security barrier that its designers envisioned. In addition, trying to do static analysis on languages with dynamic binding is "ugly".
In general, we have a problem with dynamic languages and "we haven't even thought it through yet", McGraw said. JavaScript code is a moving target that resists analysis. Some languages, such as Clojure, are "doing it right", but others, such as Ruby, are doing it wrong.
He then turned to two of the biggest web application bugs: SQL injection and cross-site scripting (XSS). Both appear near the top of the OWASP Top 10, which is produced periodically by the Open Web Application Security Project (OWASP), the organization that also runs AppSec. But lists like that need to be applied sensibly. He told a (possibly apocryphal) story of an analyst who confidently told a customer that its code was safe from the #1 bug in the OWASP Top 10 (SQL injection); the customer replied that there was no database being used.
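As an illustration of the injection problem (mine, using SQLite's C API; the users table and the lookup functions are made up for the example), compare pasting user input into the query text with binding it as a parameter:

#include <stdio.h>
#include <sqlite3.h>

/* Vulnerable: attacker-controlled 'name' becomes part of the SQL text,
   so input like  x' OR '1'='1  changes the meaning of the query. */
void lookup_unsafe(sqlite3 *db, const char *name) {
    char sql[256];
    snprintf(sql, sizeof(sql),
             "SELECT id FROM users WHERE name = '%s';", name);
    sqlite3_exec(db, sql, NULL, NULL, NULL);
}

/* Safer: a parameterized query keeps the input out of the SQL text;
   whatever 'name' contains, it is treated purely as data. */
void lookup_safe(sqlite3 *db, const char *name) {
    sqlite3_stmt *stmt;
    sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?;",
                       -1, &stmt, NULL);
    sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
    while (sqlite3_step(stmt) == SQLITE_ROW)
        printf("%d\n", sqlite3_column_int(stmt, 0));
    sqlite3_finalize(stmt);
}

(Link with -lsqlite3.)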
XSS is an example of a problem where rethinking is required. Anyone can find XSS flaws with various tools and fix them one at a time, he said. But Google rewrote the API for its web applications so that developers essentially can't do the wrong thing. XSS flaws have dropped to zero and have stayed there since that change.
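The design idea can be sketched in a few lines of C (an illustration of the principle, not Google's actual code): if the only convenient way to emit untrusted text is a path that escapes it by default, individual developers never get the chance to forget.

#include <stdio.h>

/* Emit untrusted text into an HTML page, escaping the characters
   that would let it break out of its context into markup or script. */
void emit_html_text(FILE *out, const char *s) {
    for (; *s != '\0'; s++) {
        switch (*s) {
        case '<':  fputs("&lt;", out);   break;
        case '>':  fputs("&gt;", out);   break;
        case '&':  fputs("&amp;", out);  break;
        case '"':  fputs("&quot;", out); break;
        case '\'': fputs("&#39;", out);  break;
        default:   fputc(*s, out);       break;
        }
    }
}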
The key is to "find the easy bugs now" and to fix them. It's a waste of time to argue about how to find them (i.e. different styles of analysis tools), rather than spending the time to fix them. "We have got to fix what we find", McGraw said.
The division between implementation bugs and design flaws is roughly 50/50 (though he liked the answer he once got from an audience member: 70% bugs and 70% flaws, which is the ratio he adopted for the rest of the talk). We are finding more bugs today, which is a good thing, but it is because we are looking for them. We are "standing under the light" finding the bugs there, but ignoring the "darkness over there", which is where the design problems live. If we are going to solve the security problem, we are going to have to start "talking about the other half of the problem".
To that end, McGraw has been working with other security researchers as part of the IEEE Center for Secure Design. Representatives from multiple companies (Google, Twitter, Intel, McAfee, ...) brought various design flaws they had faced to the group. From those, they came up with "Avoiding the Top 10 Security Flaws", which is "very high-level advice" designed for architects. It is also the first IEEE document released under a Creative Commons license (CC BY-SA), he said. "Yes, I made IEEE put something out under CC".
Software security zombies
McGraw then presented some of his "zombies". Normally zombies are bad, but "in my case, zombies are good". His zombies "eat your brain and live forever". They are a collection of "obvious ideas" that need to be spread more widely.
The software-security industry is growing at 20% annually, which is twice as fast as the (much larger) computer-security industry. But some industries are just getting started: retail, for example, he said with a chuckle. Unfortunately, retail is "doing it wrong" by hiring people from the government, which is five years behind the rest of the industry. He implored those present to help spread the ideas: "it is up to us to repeat the obvious to people who don't know it yet".
The first of his zombies is that "old school security" is reactive, which does not work. The idea behind a firewall (i.e. to put "something between the bad and good") requires a perimeter, which no longer exists in today's networks. The "penetrate and patch" strategy is flawed. Waiting until a product is "finished" to test it for security problems is way too late.
There is also too much weight placed on penetration testing ("pentesting"), he said, repeating the point three times for emphasis. Pentesting is important and should be done, but "if that's all you do, you are an idiot". He noted that the standard approach is to hire a pentesting firm, which then only reports three of the five bugs it found. Of those, one bug is fixed, one is partially fixed, and the third is ignored, at which point everyone can "declare victory".
Over-reliance on security functions like cryptography is another problem. "We use SSL" does not mean there are no problems with the security of the system. There is no "magic crypto fairy dust" that can be sprinkled on the product as the last thing. Security is a property, like quality or reliability, that has to be built into the product.
Another zombie is "the more code you have, the more bugs you have". Companies are producing more code, which means they will have more bugs. Even though the defect density is dropping, "which is fantastic", the rate at which it is dropping is not keeping up with the rate at which new code is being created.
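To put illustrative numbers on that (mine, not McGraw's): if a code base grows by 25% in a year while its defect density falls by 10%, the total number of bugs still rises by about 12%, since 1.25 × 0.90 = 1.125.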
Integrating security into the software development life cycle (SDLC) is another zombie. First off, anyone who thinks they work somewhere with one SDLC works "somewhere with more dogs than people", McGraw said. That is why it is important for any recommendations to work with all SDLCs. He is evidently not a fan of agile methodologies, calling them "extremely bad programming, fast!", but it is a waste of time to argue about development methodologies, he said. Instead, ensure that security best practices can be applied to whatever is being used.
There is no "badness-ometer" for security; you cannot test something into being secure. Management would love some kind of meter that would indicate that a product is secure, but it doesn't exist. Given that the halting problem means we cannot even determine if a particular program will stop, there is no hope for the badness-ometer, he said.
The final "bonus zombie" is to "fix the dang software". Security people should be fixing the problems that they find. If your division is only in the business of finding problems, "everyone will want to eliminate that organization", he said. Throwing rocks is easy, but security organizations also need to help fix the broken code.
BSIMM
McGraw wrapped up his talk with a brief description of the "Building Security In Maturity Model" (BSIMM), which is a study of software-security initiatives at multiple companies. The idea is to gather data from these companies and to build a model to describe the data. The fifth iteration of the study involved some 67 companies, which had roughly 3000 security people trying to control the work of 272,000 developers. Those companies "may all be doing it wrong", he said, but they are "doing it the same way".
The idea is to be descriptive of what was observed at the companies. The team "went into different jungles", where they "observed monkeys eating bananas". "Are bananas good?", he asked. The answer is "we don't know", but it was observed in 67 jungles. The average size of a software security group is 1.4% of the total number of developers. Is that the right number? Again, no one knows, but it is what 67 companies do.
If you look at the companies involved in the study and "don't want to do it like them", then BSIMM will not be helpful, he said. But it does provide measures of various security-related things that other companies can use to judge their own practices. In order to improve, there must be some kind of measurement. BSIMM is one measuring stick that may be helpful. For more information, he recommended an article he wrote for SearchSecurity.
In passing, he noted that there were no government contractors or agencies involved in BSIMM. That's because the government is five years behind the industry. It is "good on offense", he said, but not on defense in the software-security realm.
Those interested in finding out more about all of the topics he discussed should check out his monthly column at SearchSecurity, as well as his Silver Bullet podcast. In addition, the software security book series contains three separate books, including his Software Security, that provide lots of useful information. He suggested that people read his book, even if they get it via BitTorrent: "I don't care", he said with a laugh; even if everyone in the room bought it, "I get like $6".
McGraw's closing statement was that there are "big issues to grapple with" in software security. If we are going to succeed, we "need to do it together". His efforts with entities like the Center for Secure Design and the BSIMM are evidence that he is practicing what he is preaching. The bigger question may be whether developers and companies are listening to his sermons.