Reprinted from http://www.williamlong.info/archives/3272.html and http://www.ibm.com/developerworks/cn/rational/11-proven-practices-for-peer-review/
Code review is a common practice in software development. It is better than QA testing at finding structural and timing-related problems, and it also helps team members improve their programming skills and unify the team's coding style.
1. Code review requires the team to have a good culture
The team needs to recognize that code review exists to improve the capability of the whole team, not to rank individuals.
Thinking such as "B found a bug in A's code, so A is less capable and B is more capable" is a trap that spreads easily and damages collaboration within the team, so it must be avoided.
Beyond that, code review itself helps developers learn from their own mistakes and from others' ideas. If developers resist or resent the process, that goal cannot be achieved.
2. Be cautious about using defect counts found in review as an evaluation metric
Finding a problem during code review is a good thing, and the person who found it should be encouraged. But we do not advocate punishing the person whose code contained the problem. Bugs are unavoidable in software development, and demanding perfection is unrealistic. Worse, if participants fear taking the blame and become reluctant to point out issues, the review loses all value and meaning.
3. Control the amount of code per review
According to a study SmartBear conducted at Cisco, reviews of 200-400 lines of code at a time work best. When too much code is reviewed at once, the ability to find problems drops, and the data shows a clear proportional relationship.
In practice, we find that the optimal review size varies with the development platform and language. But limiting the amount of code per review really is necessary, because reviewing is a high-intensity mental activity. After too long, the code becomes mere letters in the reviewer's eyes, with no logical connection between them, and naturally little useful output follows.
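To make the guideline concrete, here is a minimal sketch of splitting a large change into review sessions that respect the 400-line cap. The function name is a hypothetical illustration, not part of any review tool:

```python
def chunk_for_review(total_loc, max_chunk=400):
    """Split a change of total_loc lines into review sessions of at
    most max_chunk lines each (400 follows the guideline above)."""
    if total_loc < 0:
        raise ValueError("total_loc must be non-negative")
    chunks = []
    while total_loc > 0:
        take = min(total_loc, max_chunk)  # one review session
        chunks.append(take)
        total_loc -= take
    return chunks

# A 1000-line change becomes three review sessions:
print(chunk_for_review(1000))  # [400, 400, 200]
```

In practice the split should follow logical boundaries (files, commits, features) rather than raw line counts, but the cap per session is the point.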
4. Review with questions in mind
In each code review, we ask reviewers to draw on their experience to think about what problems the code might run into, and then verify through the review that those problems are handled. One trick is to start from the user-visible functionality: verify while reading the code that the basic usage scenario works correctly, then assume progressively more complex scenarios.
This technique gives reviewers a sense of immersion, drawing them into the code and improving efficiency. We all know it is hard to doze off over a martial-arts novel but easy to doze off over a professional book; the reason is that the novel creates a much stronger sense of involvement.
Some studies suggest setting targets such as the amount of code to audit per unit of time. In our practice this approach feels overly mechanical and process-driven, while the method above works well.
5. All issues and modifications must be confirmed by the original author
If a problem is found in the review, it must be confirmed by the original author.
This is done for two purposes:
(1) Confirm that the problem exists and that the problem is resolved
(2) Let the original author understand the problems and deficiencies to help them grow
Sometimes, for the sake of efficiency, experienced reviewers prefer to modify, or even refactor, the code themselves. But this does not help the team improve, and it raises the chance that the rewrite introduces new bugs, so we generally discourage it.
6. Use code review to activate individual initiative
Even if the project schedule is tight and you cannot review all the code, it is a good idea to extract some key parts, or at least a portion of the code, for review.
The logic behind this is that software development is highly creative work, and developers have a strong drive toward self-fulfillment. Letting developers know that any code they write may be read and examined by others prompts them to focus, and in particular to avoid submitting poor-quality code, or code with elementary mistakes, for peer review. Open source software exploits the same mindset to improve code quality.
7. Conduct code review in an informal, relaxed environment
As mentioned earlier, code review is a mentally intensive task, and participants need to do it in a relatively relaxed environment. We therefore think that conducting code review as a formal meeting, as some practices suggest, is a bad idea: not only are long meetings inefficient, but the disputes and posturing that can arise in meetings are not conducive to such demanding work.
8. Self-review before submitting the code, and add a description of the change
All team members must review their own code before submitting it to other members for review. Besides checking the correctness of the code, this self-review can also accomplish the following:
(1) Add comments to the code explaining the reasons behind the change, making it easier for others to review.
(2) Correct the coding style, especially the naming of key data structures and methods, to improve the readability of the code.
(3) Review the design as a whole: have all scenarios been fully considered? If the design done before implementation missed some scenarios, this stage is a good opportunity to remedy that.
In practice, we have found that even when only the original author reviews the code, quality still improves markedly.
9. Taking notes during implementation greatly improves the problem detection rate
Having members take quick notes while coding, whether as comments in the code or in a simple personal document, has several advantages:
(1) It avoids omissions. Any concern noticed during coding is recorded and then re-examined during the review phase to confirm it was resolved.
(2) Studies show that everyone tends to repeat certain mistakes. Recording this type of problem alongside the code gives reviewers a concrete basis for inspection.
(3) After repeatedly noting such problems and finding similar ones in review, their incidence drops significantly.
10. Use good tools for lightweight code review
As the saying goes, "to do good work, one must first sharpen one's tools". We use the code hosting service provided by Bitbucket.
Each team member develops functionality independently and then submits the code to reviewers as a pull request. Reviewers can conveniently read the code and add comments directly on the page, and the original author automatically receives an email notification so the comments can be discussed.
Even when team members are geographically far apart, the tools Bitbucket provides support code review very well.
1. Review no more than 200–400 lines of code at a time
The Cisco code review study (see the sidebar) shows that for optimal effectiveness, developers should review no more than 200-400 lines of code (LOC) at a time. Beyond that, the ability to find defects diminishes. At this rate you can discover 70-90% of defects; in other words, if 10 defects are present, you will find 7 to 9 of them.
About the Cisco Case Study
After 10 months of monitoring, the study concluded that, done properly, lightweight code review is as effective as formal inspection but considerably faster (and less time-consuming). Lightweight code review takes an average of 6.5 hours less than formal inspection and finds more bugs.
Besides confirming these theories, the study uncovered some new patterns, which are summarized in this article.
Figure 1 depicts the relationship between defect density and the number of lines of code under review, and supports this rule. Defect density is the number of bugs found per 1000 lines of code. When the amount of code under review exceeds 200 lines, defect density drops sharply.
Here defect density serves as the measure of "review effectiveness". If two reviewers review the same code and one finds more bugs, we consider that reviewer more effective. Figure 1 shows how a reviewer's defect-finding effectiveness changes as we put more and more code in front of her. The result makes sense: past a certain amount she cannot spend enough time on the code, so she inevitably becomes less effective than before.
Figure 1. When the number of lines of code exceeds 200, defect density drops sharply; beyond 400 lines it approaches zero.
2. Aim for an inspection rate below 300–500 LOC per hour
- Inspection rate: how fast do we review code? Usually measured in kLOC (thousand lines of code) per hour.
- Defect rate: how fast do we find defects? Usually measured in defects found per hour.
- Defect density: how many defects do we find in a given amount of code (not in a given amount of time)? Usually measured in defects found per kLOC.
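The three metrics above can be computed with a small sketch like the following; the function and key names are illustrative assumptions, not the output of any particular review tool:

```python
def review_metrics(loc_reviewed, defects_found, hours_spent):
    """Compute the three common review metrics defined above.
    Names are illustrative; no specific tool reports exactly these."""
    return {
        "inspection_rate": loc_reviewed / hours_spent,              # LOC per hour
        "defect_rate": defects_found / hours_spent,                 # defects per hour
        "defect_density": defects_found / (loc_reviewed / 1000.0),  # defects per kLOC
    }

# A 350-line review that took one hour and found 5 defects:
m = review_metrics(350, 5, 1.0)
print(m["inspection_rate"])  # 350.0 -- inside the 300-500 LOC/hour target
```

Tracking these three numbers per review is enough to check your team against the targets in this article.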
Reviewing code takes time, and faster is not always better. Our results show that an inspection rate below 300-500 LOC per hour is optimal. Measured inspection rates varied enormously, even across similar authors, reviewers, files, and review sizes.
To find the optimal inspection rate, we compared defect density with the speed at which reviewers went through the code. The result again met our expectations: if you do not spend enough time on a review, you will not find many defects. A reviewer under pressure to get through a lot of code cannot give every line the same attention and cannot study every revision of the same location.
So how fast is too fast? Figure 2 gives the answer: review rates above 500 LOC per hour reduce effectiveness, and at very high rates the reviewer is probably not really looking at the code at all.
Figure 2. Review effectiveness drops when the inspection rate exceeds 500 lines of code per hour
3. Take enough time for a careful, unhurried review, but never more than 60-90 minutes
We have already discussed that, to get good results, you should not review too quickly. But you should not spend too long in one sitting either. After about 60 minutes, reviewers tire and stop finding additional defects, a conclusion supported by several other studies. Indeed, common experience tells us that performance in any highly focused activity declines after 60-90 minutes. Given these human limits, a reviewer cannot review more than 300-600 lines of code before effectiveness falls.
Conversely, a review should never take less than five minutes, even when the change is a single line of code. A single line can affect the entire system, so it is worth five minutes to consider the possible consequences of the change.
4. Make sure the code author annotates the source code before the review begins
Authors may be able to eliminate most defects before the review even begins, as we will see. If we require authors to double-check their own work, reviews may finish faster without harming code quality. As far as we know, this particular idea had not been studied before, so we tested it during the Cisco study.
Figure 3. The impact of author preparation on defect density
"Author preparation" means that the author annotates the source code before the review begins. We coined the term to describe a specific behavior pattern evaluated in the study; it appeared in roughly 15% of the reviews. The annotations guide reviewers through the change, showing which files to look at first and explaining the reason for each modification. These are not comments in the code itself, but notes addressed to the other reviewers.
Our theory was that because authors must rethink and explain their changes while annotating them, they uncover many defects themselves before the review begins, which makes the review more effective. The review then yields a lower defect density, because fewer bugs remain to be found. And indeed, reviews with author preparation turned up far fewer defects than reviews without it.
We also considered a more pessimistic theory for the low bug-detection rate: perhaps when the author annotates the code, reviewers become biased or complacent, and simply miss bugs that are there. We randomly sampled 300 reviews to check, and confirmed that the reviewers were in fact examining the code carefully; there really were fewer defects to find.
5. Establish quantifiable goals for code review, and capture metrics so you can improve the process
As with any project, you should decide in advance what the goals of the code review process are and how you will measure its effectiveness. Once you have defined specific goals, you can determine whether peer review is really producing the results you need.
It is best to start with external metrics, such as "reduce support calls by 20%" or "halve the percentage of defects introduced by development". This information gives you a clear outside view of what the review process is accomplishing, and a quantifiable measurement rather than a vague goal like "fix more bugs".
However, external metrics take time to show results. For example, support calls will not change until a new version has been released and is in customers' hands. So also watch internal process metrics to see how many defects are found, where the problems lie, and how much time developers spend on reviews. The most common internal metrics for code review are inspection rate, defect rate, and defect density.
Keep in mind that only an automated or tightly controlled process can give you repeatable metrics; humans are bad at remembering to start and stop stopwatches. For best results, use a code review tool that collects metrics automatically, so that the data driving your process improvement is accurate.
To improve your process, collect the metrics, adjust the process, and observe how the changes affect the results. Soon you will know what works best for your team.
6. Use checklists, because they substantially improve outcomes for both authors and reviewers
Checklists are especially important for reviewers: if the author forgot a task, the reviewer is likely to miss it too.
We strongly recommend using checklists to catch things that might be forgotten; they are useful to authors and reviewers alike. Omissions are the hardest defects to find; after all, it is hard to review something that is not there. A checklist is the best way to attack this problem, because it reminds both reviewer and author to take the time to think about what might otherwise be overlooked. A checklist can also remind them to verify that all errors are handled, that the software rejects invalid values, and that unit tests have been created.
Another useful concept is the personal checklist. Each person typically makes the same 15-20 kinds of mistakes. If you notice your own typical errors, you can develop a personal checklist (the Personal Software Process, from the Software Engineering Institute, and the Capability Maturity Model Integration also recommend this practice). Reviewers will do the work of identifying your common mistakes; all you need to do is keep a short checklist, usually of the things you most easily forget.
Once you start recording your defects in a checklist, you will make those mistakes less often. The rules will stay fresh in your mind, and your error rate will drop. We have seen this happen again and again.
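As a rough illustration, a personal checklist can be kept as simple data and consulted during self-review. The items and names below are purely hypothetical examples, not from any published checklist:

```python
# A hypothetical personal checklist: recurring mistakes one developer
# has recorded from past reviews (the items are illustrative).
PERSONAL_CHECKLIST = [
    "Handled the empty-input case?",
    "Closed every file/socket on all paths?",
    "Rejected invalid values instead of passing them through?",
    "Updated unit tests for the changed behavior?",
]

def unresolved_items(confirmed):
    """Return checklist items not yet confirmed during self-review.
    `confirmed` is the set of items the author has already verified."""
    return [item for item in PERSONAL_CHECKLIST if item not in confirmed]

# After confirming only the first item, three remain to check:
print(len(unresolved_items({PERSONAL_CHECKLIST[0]})))  # 3
```

The value is in the habit, not the tooling: a short list consulted before every submission is what drives the error rate down.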
Tips:
For more detail on checklists, you can download a free copy of the book by this article's author, Best Kept Secrets of Peer Code Review, at www.codereviewbook.com.
7. Confirm that the defect has been repaired
Yes, this "best practice" sounds obvious. If you go to the trouble of reviewing code to find bugs, then fixing them is the logical next step! Yet many teams that review code have no good way to track the defects found during review, or to make sure the bugs are actually fixed before the review is closed. Verifying fixes over email, or during a live review meeting, is especially difficult.
Remember that these bugs are usually not entered in the Rational Team Concert defect log, because they are found before the code is released to QA. So what is a good way to verify that defects are fixed before declaring the code done? We recommend good collaborative review software, integrated with Rational Team Concert, to track the defects found during review. With the right tool, the reviewer can log a bug and, if necessary, discuss it with the author. The author then fixes the problem and notifies the reviewer, and the reviewer must confirm that each issue is resolved. The tool should track the issues raised during the review and ensure that the review is not marked complete until the reviewer has verified every bug (or spun it off as a separate work item to be resolved later). The associated work item can pass only when the review is complete.
If you are going to the trouble of finding the bugs, make sure you have fixed them all!
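The completion rule described in this section, that a review stays open until every defect is verified fixed or deferred to its own work item, can be modeled with a toy sketch. The class and method names are illustrative, not from CodeCollaborator or Rational Team Concert:

```python
# Toy model of the rule above: a review cannot be closed while any
# recorded defect is still open. Names are purely illustrative.
class Review:
    OPEN, FIXED_VERIFIED, DEFERRED = "open", "fixed-verified", "deferred-work-item"

    def __init__(self):
        self.defects = []

    def record(self, description):
        """Reviewer logs a defect; returns its index for later resolution."""
        self.defects.append({"desc": description, "status": self.OPEN})
        return len(self.defects) - 1

    def resolve(self, index, status):
        """Reviewer verifies the fix, or defers it as a separate work item."""
        if status not in (self.FIXED_VERIFIED, self.DEFERRED):
            raise ValueError("reviewer must verify the fix or defer it")
        self.defects[index]["status"] = status

    def can_complete(self):
        """The review may close only when no defect remains open."""
        return all(d["status"] != self.OPEN for d in self.defects)

r = Review()
i = r.record("off-by-one in pagination loop")
print(r.can_complete())  # False: the open defect blocks completion
r.resolve(i, Review.FIXED_VERIFIED)
print(r.can_complete())  # True
```

A real tool adds discussion threads and notifications around this state machine, but the invariant is the same: no open defects, or no closed review.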
Now that you have learned the best practices for the code review process itself, we will discuss some of its social effects and how to manage them to get the best results.
8. Build a good code review culture, one in which finding defects is welcomed
Code review can contribute more to real team building than almost any other technique we know of, but only when managers communicate about it in a positive, constructive, and skillful way. It is easy to see defects as bad (after all, they are bugs in the code), but a poor attitude toward found defects can ruin a whole team's effort, to say nothing of wrecking the defect-finding process itself.
The point of software code review is to eliminate as many defects as possible, regardless of who "caused" the bug.
Managers must promote the view that finding defects is positive. After all, each defect is a potential opportunity to improve the code, and the review process exists to make the code as good as possible. Every defect found and fixed is a defect a customer will never see, and a problem QA staff need not spend time on.
The team should maintain the attitude that finding a defect means the author and reviewer have succeeded, as a team, in improving the product, not that "the author created a defect and the reviewer caught it". It is more like an effective form of pair programming.
Reviews give every developer a chance to correct bad habits, learn new tricks, and expand their abilities. Developers can learn from their bugs, but only if they are not afraid of them. If developers dread having bugs found, the positive effects disappear.
If you are a junior developer, or new to the team, having others find defects in your code means your stronger teammates are helping you grow into a capable developer, faster than you would by programming alone without specific feedback.
To keep this view of defect finding positive, managers must promise that defect counts will never appear in performance reports. Making that commitment publicly is very effective: developers then know what to expect and can protest if a manager openly breaks the rule.
Managers must never use bugs found in review as the basis for a negative performance evaluation. They must be cautious and sensitive to the frustration and defensiveness that criticism can trigger, and keep reminding the team that finding defects is a good thing.
9. Beware of the "Big Brother" effect
As a developer, you may automatically assume that "Big Brother is watching you", especially when metrics are collected automatically by a review-support tool. Did you take too long to review a change? Did your peers find a lot of bugs in your code? How will this affect your next performance evaluation?
Metrics should never be used against developers, especially measurements drawn from individual reviews. Doing so seriously undermines morale.
Metrics are essential for evaluating the process, which in turn is the basis for improving it. But metrics can also be abused. If developers believe the numbers will be used against them, not only do they become hostile to the process, but their attention may shift to gaming the metrics rather than writing better code and becoming more productive.
Managers can do a great deal to prevent this. First and foremost, they must stay alert to the problem, make sure authors are not placed under undue pressure, and watch for the "Big Brother" effect in every review.
Metrics should be used to evaluate the efficiency of the process, or the effect of a process change. Remember that, in general, the most difficult code is assigned to the most experienced developers. That code is the most likely to be risky, the hardest to review, and the most likely to yield defects. A large number of defects is therefore more likely caused by the complexity and the chunking of the code than by the author's ability.
Even when the metrics do help a manager uncover a problem, confronting an individual is likely to create more problems than it solves. We recommend that managers address such issues with the group as a whole. It is best not to call a special meeting, which would make developers feel singled out; instead, raise the issue in a weekly status meeting or through some other normal channel.
Managers must keep reinforcing the idea that finding defects is good, not bad, and that defect density is not tied to a developer's ability. For a team, defects, and especially the number of defects introduced by a given member, must never be used as a measure of competence.
10. Review at least part of the code, even if you cannot review all of it, to benefit from the Ego Effect
Imagine sitting in front of a compiler with the task of fixing a small bug, knowing that the moment you say "I'm done", a peer, or worse, your boss, will check your work. Does that change how you work? While you are working, and before you declare the code ready, you become noticeably more careful. You become a better developer on the spot, because you want people saying behind your back "his stuff is solid; he's a really good developer", not "he makes a lot of stupid mistakes; when he says the work is done, it's actually far from it".
The Ego Effect drives developers to write better code because they know others will look at their code and their work. Nobody wants to be known for making rookie mistakes. The Ego Effect makes developers review their own work more carefully before handing it over.
A nice property of the Ego Effect is that it works whether reviewers examine every change or only perform spot checks, much like random drug testing. If your code has a one-in-three chance of being reviewed, that is still enough to motivate careful work. With only a one-in-ten chance, you may not be as diligent; you may tell yourself, "Eh, I rarely make mistakes."
Reviewing 20-33% of the code probably captures most of the Ego Effect, and reviewing 20% is certainly far better than reviewing none.
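A spot-check policy like the one described here can be sketched as a random sample over submitted changes. The function name and the 25% fraction are illustrative assumptions, not prescriptions from the study:

```python
import random

def spot_check(changes, fraction=0.25, seed=None):
    """Randomly select a fraction of submitted changes for review.
    The 20-33% range comes from the text; names here are illustrative."""
    rng = random.Random(seed)  # seed only to make the example reproducible
    k = round(len(changes) * fraction)
    return rng.sample(changes, k)

# With a 1-in-4 chance of review, 25 of 100 changes get picked:
picked = spot_check(list(range(100)), fraction=0.25, seed=42)
print(len(picked))  # 25
```

What matters for the Ego Effect is that the selection is unpredictable: no author can know in advance that a particular change will escape review.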
11. Use lightweight, tool-supported code reviews
Code reviews come in several major types with countless variations, and this guide applies to all of them. However, to make the best use of the team's review time, we have found that a lightweight, tool-supported review process produces the best results: it is both effective and practical at finding defects.
Formal, heavyweight inspection has been popular for thirty years, but it is no longer the most efficient way to review code. A heavyweight inspection averages nine hours per 200 lines of code. Despite its effectiveness, the rigid process requires three to six participants and a series of tedious meetings to examine details. Unfortunately, few companies can afford to tie up their programmers in lengthy meetings, and in recent years many development organizations have abandoned the meeting schedules, paper-based code readings, and tedious metrics collection entirely, adopting new lightweight processes that free them from the overhead of formal meetings and old-fashioned heavyweight process.
Our Cisco case study compared the lightweight technique with the formal process. The result: a lightweight code review takes one-fifth of the time (or less) of a formal review, and finds more bugs.
Although lightweight code review can take many forms, such as real-time reviews and email reviews, the most effective form uses collaborative software to facilitate the review, such as SmartBear's CodeCollaborator (see Figure 4).
Figure 4. CodeCollaborator, the lightweight code review tool used in the Cisco study
CodeCollaborator is a code review tool that integrates with the IBM® Rational Team Concert workflow. It combines source code display with chat-style discussion threads, freeing developers from the tedium of manually associating comments with particular lines of code. When a programmer attaches changes to a work item for review, a review is automatically created in CodeCollaborator and the appropriate reviewers are assigned. Team members can annotate the code directly, chat with the author, collaborate on each issue, and track bugs through to their fixes. The whole process requires no meetings, printing, or scheduling.
With a lightweight review process built on Rational Team Concert and CodeCollaborator, a team can review far more effectively and realize all of code review's advantages.