Posted by merc in Vulnerability Research
on Mar 1st, 2011 | 0 comments
How do you find vulnerabilities in software? Here is a little bit about the tools and techniques I am familiar with.
Probably the most common technique for finding vulnerabilities is fuzzing. Fuzzing simply means sending random or not-so-random data to software. The more random a fuzzer is, the dumber it is; so-called smart fuzzers follow protocols or file formats more closely. There is no right or wrong here, and a combination of both is usually useful in finding vulnerabilities. Dumber fuzzers tend to scratch the surface of the software, while smarter fuzzers get deeper into its internals.
Code coverage determines how much of a program is actually being tested and can be used to measure the efficiency of a fuzzer. The target software is usually monitored for crashes, and successes are logged along with the input that caused the crash. After a crash occurs, it is up to the vulnerability researcher to debug it and determine whether it can be exploited and is in fact a security vulnerability; otherwise it is a reliability bug. A common fuzzing framework is Peach, and it is also trivial to write your own custom fuzzers in most scripting languages. A good book and web site on fuzzing can be found at: http://www.fuzzing.org/
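To show just how trivial a custom dumb fuzzer can be, here is a minimal mutation-based sketch in Python. The `mutate`, `fuzz`, and `target` names are made up for illustration, and a real fuzzer would run the target in a separate, monitored process rather than catching exceptions in-line:

```python
import random

def mutate(data: bytes, num_flips: int = 8) -> bytes:
    """Return a copy of `data` with a few randomly chosen bytes replaced."""
    buf = bytearray(data)
    for _ in range(num_flips):
        pos = random.randrange(len(buf))  # assumes a non-empty seed
        buf[pos] = random.randrange(256)
    return bytes(buf)

def fuzz(target, seed: bytes, iterations: int = 1000):
    """Feed mutated inputs to `target` and log any input that crashes it."""
    crashes = []
    for _ in range(iterations):
        case = mutate(seed)
        try:
            target(case)
        except Exception as exc:
            # Keep the input alongside the failure so the crash can be
            # reproduced and debugged later.
            crashes.append((case, exc))
    return crashes
```

Everything here is "dumb": the mutations know nothing about the input format, which is exactly why such a fuzzer mostly scratches the surface of the target.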
Several variations of code analysis are used to search software for vulnerabilities. Static analysis means the program is inspected while it is NOT running, as opposed to dynamic analysis (when the program IS running). Source analysis looks at source code (a.k.a. white-box) while binary analysis looks at the compiled binary code. Static code analysis is great because it is very easy to run the tools and get results, and source analysis can also be integrated into the software build process. Static source code analysis happens in the development phase of the SDLC, so it has a great return on investment: fixing bugs in development is an order of magnitude less expensive than fixing security issues once software has made it to production systems and is live. A common tool used for code analysis is Coverity.
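To make the static idea concrete, here is a toy source analyzer sketched with Python's `ast` module: it walks the parsed syntax tree without ever executing the code and flags a few dangerous calls. The `DANGEROUS` list is an arbitrary example and nowhere near the rule sets of a real tool like Coverity:

```python
import ast

# A tiny, illustrative deny-list; real analyzers ship hundreds of checks.
DANGEROUS = {"eval", "exec", "os.system"}

def find_dangerous_calls(source: str):
    """Parse `source` and report (line, name) for calls on the deny-list,
    without running the program being analyzed."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            name = None
            if isinstance(node.func, ast.Name):
                name = node.func.id
            elif (isinstance(node.func, ast.Attribute)
                  and isinstance(node.func.value, ast.Name)):
                name = f"{node.func.value.id}.{node.func.attr}"
            if name in DANGEROUS:
                findings.append((node.lineno, name))
    return findings
```

Because the input never runs, the same check works on code you could not safely execute, which is the core appeal of static analysis.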
Code review means reading through the source code and looking for vulnerabilities in the software. The only tools required for the task are a human brain and at least one eye. For common vulnerabilities to watch out for, take a look at the SANS Top 25 and the OWASP Top 10.
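As an example of the kind of issue a reviewer watches for, here is a classic SQL injection in Python (the table, column, and function names are made up for illustration), together with the parameterized fix a reviewer would ask for:

```python
import sqlite3

def get_user_unsafe(conn, username):
    # Flagged in review: user input is concatenated straight into the
    # query string, so crafted input can rewrite the SQL (injection).
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def get_user_safe(conn, username):
    # Fixed: a parameterized query keeps the data separate from the SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

A payload like `x' OR '1'='1` makes the first version return every row, while the second treats it as an ordinary (non-matching) name.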
Just like when reviewing code, it is possible to reverse engineer compiled code and dissect it to identify vulnerabilities. It is not practical as a starting point, but it allows you to zoom in on a particular area of code that is suspected of having vulnerabilities. Common tools used in reverse engineering are IDA Pro and the Immunity Debugger.
Threat modeling is an activity that occurs early in the SDLC and is therefore a good way to eliminate security vulnerabilities before they even exist, which also happens to be the most cost-efficient time to fix issues. It is also possible to threat model an application that already exists in order to identify vulnerabilities. Threat modeling is really helpful in identifying logical security issues that would otherwise be missed by other techniques. It starts with the creation of data flow diagrams (DFDs). The process then takes you down the path of carefully analyzing each component of the software and identifying any spoofing, tampering, repudiation, information disclosure, denial of service, or elevation of privilege (STRIDE) issues. A great tool for threat modeling is Microsoft’s SDL Threat Modeling Tool.
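The "STRIDE-per-element" heuristic from Microsoft's SDL material can be sketched as a small lookup table: each DFD element type only needs to be analyzed for the threat categories that typically apply to it. The element names and the `enumerate_threats` function below are hypothetical, a sketch of the idea rather than any official API:

```python
# Which STRIDE categories typically apply to each DFD element type,
# per the commonly published STRIDE-per-element heuristic.
STRIDE_PER_ELEMENT = {
    "external_entity": {"Spoofing", "Repudiation"},
    "process": {"Spoofing", "Tampering", "Repudiation",
                "Information disclosure", "Denial of service",
                "Elevation of privilege"},
    "data_store": {"Tampering", "Repudiation",
                   "Information disclosure", "Denial of service"},
    "data_flow": {"Tampering", "Information disclosure",
                  "Denial of service"},
}

def enumerate_threats(dfd_elements):
    """Given (name, element_type) pairs from a data flow diagram,
    list the STRIDE categories each element should be analyzed for."""
    return {name: sorted(STRIDE_PER_ELEMENT[etype])
            for name, etype in dfd_elements}
```

The output is only a checklist; the actual analysis of each (element, threat) pair is still the analyst's job.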
It is possible to find security issues by testing software manually, either by poking around randomly or by being somewhat more methodical about it. Luck is probably the best asset in finding vulnerabilities manually.
What other techniques do you use to find vulnerabilities? What are your favorite tools?