Java code quality management is mainly divided into three aspects:
- Code style
- Static code analysis
- Unit Test
These are three progressive layers: how the code looks -> how the code is analyzed -> how the code runs. Java provides excellent tools and almost seamless Eclipse integration for each of them.
# Code style
This is mainly managed by Eclipse and configured under Preferences/Java/Code Style, including:
- Clean Up: automatically adds/deletes/modifies code so that it better complies with the coding standard. Run it via right-click/Source/Clean Up.
- Code Templates: templates used to insert code snippets or comments. You can insert a comment with Alt + Shift + J.
- Formatter: automatically formats the code. Unlike Clean Up, it only changes formatting; run it with Ctrl + Shift + F. To tidy up a whole project, select the project folder and use right-click/Source/Format to format every file. You can also use a Save Action to format files automatically on save.
- Organize Imports: automatically adds missing imports and sorts them. The shortcut is Ctrl + Shift + O.

Eclipse's support for code style is so thorough that it is hard to write code that does not conform to the standard; by comparison, Visual Studio's support for C++ is far weaker.
# Static code analysis
- Checkstyle

Checkstyle analyzes source code. Early versions only provided code style checks, but features have been added over time, and it now also catches some basic design problems, such as method parameters that should be declared final, or methods that are not meant to be overridden and should therefore be declared final.
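To make that kind of check concrete, here is a small Java sketch (my own illustration, not from the original text) of code that Checkstyle's standard FinalParameters check would flag, together with the compliant version:

```java
public class FinalParameterExample {

    // Flagged by Checkstyle's FinalParameters check: the parameter is not final.
    public int square(int value) {
        return value * value;
    }

    // Compliant: the final keyword documents that the parameter is never reassigned.
    public int squareCompliant(final int value) {
        return value * value;
    }
}
```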
Checkstyle can be configured under Preferences/Checkstyle. If you want to disable some checks, you first need to copy the built-in configuration into one of your own. The Checkstyle configuration dialog is used for this, and it is also a good place to read the explanation of each check:

Eclipse handles compilation and debugging during development, but the final jar or war is usually built with Ant at release time. Besides, if you need to run CI, an Ant build script is essential, and Checkstyle provides an Ant task:
    <target name="checkstyle" description="Run check styles">
      <mkdir dir="${checkstyle.dir}"/>
      <checkstyle config="${checkstyleinstall.dir}/sun_checks.xml">
        <fileset dir="${src.dir}" includes="**/*.java"/>
        <fileset dir="${test.dir}" includes="**/*.java"/>
        <formatter type="plain"/>
        <formatter type="xml" toFile="${checkstyle.dir}/checkstyle_errors.xml"/>
      </checkstyle>
    </target>
- FindBugs

FindBugs analyzes bytecode rather than source code and mainly looks for bug patterns (coding idioms that easily lead to bugs). Its configuration, which also serves as a handy reference, lives under Preferences/Java/FindBugs/Detector configuration and is grouped into several categories: malicious code vulnerability, dodgy code, bad practice, correctness, internationalization, performance, security, multithreaded correctness, and experimental. In addition, FindBugs uses a plugin architecture, so users can easily plug in their own specific checks.
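As an illustration (my own example, not from the original text), comparing strings with == is a classic bug pattern that FindBugs reports in the correctness category as ES_COMPARING_STRINGS_WITH_EQ:

```java
public class StringComparisonExample {

    // Bug pattern: == compares object identity, not string content, so this
    // may return false even when role contains "admin".
    public static boolean isAdmin(String role) {
        return role == "admin";
    }

    // Fixed version: compare content with equals(); calling it on the literal
    // also avoids a NullPointerException when role is null.
    public static boolean isAdminFixed(String role) {
        return "admin".equals(role);
    }
}
```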
FindBugs also provides annotations, such as @NonNull and @CheckForNull, so that users can cooperate with the tool and get more precise checks.
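A minimal sketch of how these annotations are used, assuming the FindBugs annotations jar (package edu.umd.cs.findbugs.annotations) is on the classpath; the repository class and its methods are made up for illustration:

```java
import edu.umd.cs.findbugs.annotations.CheckForNull;
import edu.umd.cs.findbugs.annotations.NonNull;

public class UserRepository {

    // @CheckForNull tells FindBugs that callers must handle a null return value.
    @CheckForNull
    public String findEmail(@NonNull final String userId) {
        // Lookup omitted; may legitimately return null for an unknown user.
        return null;
    }

    public int emailLength(@NonNull final String userId) {
        final String email = findEmail(userId);
        // Without this null check, FindBugs would warn about dereferencing
        // a value annotated with @CheckForNull.
        return email == null ? 0 : email.length();
    }
}
```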
In addition to the Eclipse plugin, FindBugs also provides an Ant task:
    <target name="findbugs" depends="jar">
      <mkdir dir="${findbugs.dir}"/>
      <findbugs home="${findbugs.home}" output="xml"
                outputFile="${findbugs.dir}/interviewerportal-findbugs.xml">
        <sourcePath path="${src.dir}"/>
        <class location="${target.jar.name}"/>
      </findbugs>
    </target>
# Unit test
JUnit is easy to use. Knowing @Test and the assert methods is enough to write a decent test case, and the test fixture annotations (@Before, @After, @BeforeClass, @AfterClass) cover most other needs; for the remaining details, the JUnit FAQ is basically all you need: http://junit.sourceforge.net/doc/faq/faq.htm
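A minimal sketch of such a test case (the Calculator class under test is hypothetical):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Before;
import org.junit.Test;

public class CalculatorTest {

    private Calculator calculator; // hypothetical class under test

    // Test fixture: runs before every @Test method.
    @Before
    public void setUp() {
        calculator = new Calculator();
    }

    @Test
    public void addShouldReturnSumOfOperands() {
        assertEquals(5, calculator.add(2, 3));
    }
}
```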
You can also run it from Ant:
    <path id="classpath">
      <fileset file="${target.jar.name}"/>
    </path>

    <target name="junit" depends="jar">
      <mkdir dir="${report.dir}"/>
      <junit printsummary="yes">
        <classpath>
          <path refid="classpath"/>
        </classpath>
        <formatter type="xml"/>
        <batchtest todir="${report.dir}">
          <fileset dir="${test.dir}" includes="**/*Test.java"/>
        </batchtest>
      </junit>
    </target>

    <target name="junitreport" depends="junit">
      <junitreport todir="${report.dir}">
        <fileset dir="${report.dir}" includes="TEST-*.xml"/>
        <report todir="${report.dir}"/>
      </junitreport>
    </target>
In addition, JUnit is just a tool; using a good tool is one thing, writing good test cases is another. The small book Pragmatic Unit Testing is worth reading, especially chapters 4 and 5, which describe how to design test cases.
Finally, here is Kent Beck on how much testing is enough:
> I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence (I suspect this level of confidence is high compared to industry standards, but that could just be hubris). If I don't typically make a kind of mistake (like setting the wrong variables in a constructor), I don't test for it. I do tend to make sense of test errors, so I'm extra careful when I have logic with complicated conditionals. When coding on a team, I modify my strategy to carefully test code that we, collectively, tend to get wrong.
>
> Different people will have different testing strategies based on this philosophy, but that seems reasonable to me given the immature state of understanding of how tests can best fit into the inner loop of coding. Ten or twenty years from now we'll likely have a more universal theory of which tests to write, which tests not to write, and how to tell the difference. In the meantime, experimentation seems in order.