One of the most contentious tasks a software development team faces in its product lifecycle is bug triage. Determining the relative importance of any given bug (and, from there, the likelihood that the bug will not be fixed before release) is a serious matter for everyone involved in product development.
Programmers, testers, architects, and project managers all have different perspectives, and their triage decisions are based on a disparate set of factors such as these:
How much code will have to be regression tested after the fix.
How close the project is to shipping.
How many users will be affected by the change.
Whether the bug is blocking other problems from being tested or fixed.
I acknowledge that these are important considerations when triaging functional bugs, that is, defects in the product's functionality. However, they should not factor into the decision of whether to fix a security bug (a bug that could lead to a security breach of the product). The classification of security bugs must be objective and consistent. To an attacker, it makes no difference whether you discovered a vulnerability the week before the code-complete milestone or six months earlier; the attacker will exploit it all the same.
This column describes the objective security bug classification system (the "bug bar") used by Microsoft's internal product and online services teams, as required by the Security Development Lifecycle (SDL). It also shows how you can incorporate this classification system into your own development environment using Microsoft Team Foundation Server 2010.
DREAD
Before I discuss the bug bar used within Microsoft today, it is important to introduce Microsoft's earlier security bug classification initiative: DREAD. DREAD is a mnemonic that stands for:
Damage potential
Reproducibility
Exploitability
Affected users
Discoverability
Anyone logging a new security bug assigns a value from 1 to 10 for each DREAD parameter, with 10 being the most severe and 1 the least. These values are then averaged to produce an overall DREAD rating. For example, suppose a developer named Doug finds a blind SQL injection vulnerability in the administration portal page of his team's new web application. Doug might classify the vulnerability as shown in Figure 1.
Figure 1 The developer's classification of the security vulnerability

| DREAD parameter | Rating | Rationale |
| --- | --- | --- |
| Damage potential | 5 | |
| Reproducibility | 10 | Can be reproduced every time. |
| Exploitability | 2 | |
| Affected users | 1 | |
| Discoverability | 1 | No user pages link to the affected page. |
| Overall | 3.8 | |
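The averaging step behind Doug's 3.8 rating can be sketched in a few lines. This is a minimal illustration of the DREAD arithmetic described above, not any official tooling; the function name and validation are my own.

```python
# Minimal sketch of a DREAD rating calculator.
# Each parameter is rated 1 (least severe) to 10 (most severe);
# the overall rating is the average of the five values.

def dread_rating(damage, reproducibility, exploitability,
                 affected_users, discoverability):
    """Average the five DREAD parameters into an overall rating."""
    scores = [damage, reproducibility, exploitability,
              affected_users, discoverability]
    for s in scores:
        if not 1 <= s <= 10:
            raise ValueError("each DREAD parameter must be rated 1-10")
    return sum(scores) / len(scores)

# Doug's ratings from Figure 1: 5, 10, 2, 1, 1
print(dread_rating(5, 10, 2, 1, 1))  # -> 3.8
```

Note that because every rater picks these five numbers subjectively, two people can feed very different inputs into the same formula, which is exactly the problem the next example illustrates.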
The classification shown in Figure 1 seems simple and effective enough. But consider that Doug's colleague Tina, a tester, might look at the same vulnerability in a completely different way, as shown in Figure 2.