Open-source software security
Open-source software security is the measure of assurance or guarantee of freedom from the dangers and risks inherent in an open-source software system.
Implementation debate
Benefits
- Proprietary software forces the user to accept the level of security that the software vendor is willing to deliver and the rate at which patches and updates are released.
- It is generally assumed that the compiler in use produces code that can be trusted, but Ken Thompson demonstrated that a compiler can be subverted with a compiler backdoor so that even a well-intentioned developer unwittingly produces faulty executables. With access to the compiler's source code, the developer at least has the ability to discover any malicious intent.
- Kerckhoffs' principle is based on the idea that an enemy could steal a secure military system and still not be able to compromise the information it protects. His ideas became the basis for many modern security practices, and they imply that security through obscurity is a bad practice.
Drawbacks
- Simply making source code available does not guarantee review. An example of this occurring is when Marcus Ranum, an expert on security system design and implementation, released his first public firewall toolkit. At one time, there were over 2,000 sites using his toolkit, but only 10 people gave him any feedback or patches.
- Having a large number of eyes reviewing code can "lull a user into a false sense of security". Having many users look at source code does not guarantee that security flaws will be found and fixed.
Metrics and models
Number of days between vulnerabilities
It is argued that a system is most vulnerable after a potential vulnerability is discovered but before a patch is created. By measuring the number of days between the discovery of a vulnerability and the release of a fix, a basis can be established for assessing the security of the system. There are a few caveats to such an approach: not every vulnerability is equally severe, and fixing many bugs quickly is not necessarily better than finding only a few and taking somewhat longer to fix them, once the operating system and the effectiveness of the fix are taken into account.
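As a minimal sketch of this metric, assuming a hypothetical list of (discovery date, patch date) records, the average exposure window could be computed as follows:

```python
from datetime import date

# Hypothetical vulnerability records: (date discovered, date patched).
vulnerabilities = [
    (date(2023, 1, 4), date(2023, 1, 9)),
    (date(2023, 2, 11), date(2023, 2, 12)),
    (date(2023, 3, 2), date(2023, 3, 20)),
]

# Days of exposure for each vulnerability (patch date minus discovery date).
exposure_days = [(patched - found).days for found, patched in vulnerabilities]

# Average days-to-fix as a crude measure of how quickly the project responds.
average_exposure = sum(exposure_days) / len(exposure_days)
print(f"Average days between discovery and fix: {average_exposure:.1f}")
```

Note that this simple average treats every vulnerability as equally severe, which is exactly the caveat raised above.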
Poisson process
The Poisson process can be used to measure the rates at which different people find security flaws in open and closed source software. The process can be broken down by the number of volunteers Nv and paid reviewers Np. The rate at which volunteers find a flaw is measured by λv and the rate at which paid reviewers find a flaw is measured by λp. The expected time for the volunteer group to find a flaw is 1/(Nvλv) and the expected time for the paid group to find a flaw is 1/(Npλp).
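As a hedged sketch of this model, using made-up rate values purely for illustration: with N independent reviewers each finding flaws at rate λ, the combined process is Poisson with rate Nλ, so the expected time to the first flaw is 1/(Nλ). The following computes and simulates that quantity for both groups:

```python
import random

# Hypothetical parameters: group sizes and per-reviewer flaw-finding rates
# (flaws per day). These numbers are illustrative, not empirical.
N_v, lambda_v = 200, 0.001   # volunteers
N_p, lambda_p = 5, 0.02      # paid reviewers

# Expected time to the first flaw for each group: 1 / (N * lambda).
expected_volunteer = 1 / (N_v * lambda_v)
expected_paid = 1 / (N_p * lambda_p)
print(f"Expected days to first flaw (volunteers): {expected_volunteer:.1f}")
print(f"Expected days to first flaw (paid): {expected_paid:.1f}")

# Monte Carlo check: inter-arrival times of a Poisson process are exponential,
# so we average exponential draws with the combined rate.
trials = 10_000
sim_volunteer = sum(random.expovariate(N_v * lambda_v) for _ in range(trials)) / trials
print(f"Simulated mean days to first flaw (volunteers): {sim_volunteer:.1f}")
```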
Morningstar model
By comparing a large variety of open source and closed source projects, a star system could be used to analyze the security of a project, similar to how Morningstar, Inc. rates mutual funds. With a large enough data set, statistics could be used to measure the overall effectiveness of one group over the other. An example of such a system is as follows:
- 1 Star: Many security vulnerabilities.
- 2 Stars: Reliability issues.
- 3 Stars: Follows best security practices.
- 4 Stars: Documented secure development process.
- 5 Stars: Passed independent security review.
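As a minimal sketch of how such a rubric might be applied, the following assigns each project the highest star level it satisfies; the project data, attribute names, and the vulnerability threshold are all hypothetical:

```python
# Hypothetical project data:
# name: (known vulnerabilities, follows best practices, documented process, passed review)
PROJECTS = {
    "project-a": (42, False, False, False),
    "project-b": (3, True, False, False),
    "project-c": (0, True, True, True),
}

def star_rating(vulns, best_practices, documented_process, independent_review):
    """Map project attributes onto the 1-5 star scale described above."""
    if independent_review:
        return 5
    if documented_process:
        return 4
    if best_practices:
        return 3
    if vulns < 10:   # illustrative cutoff separating "reliability issues" from "many vulnerabilities"
        return 2
    return 1

for name, attrs in PROJECTS.items():
    print(name, star_rating(*attrs), "stars")
```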
Coverity scan
- Rung 0
- Rung 1
- Rung 2