IV. Built-in plugins

1. Attrib: tag test cases so you can filter which ones run
Test cases often need to be run at different levels. nose provides this through the attrib plugin, which lets you tag test cases with attributes and select them at run time.
There are two ways of doing this:
def test_big_download():
    import urllib
    # commence slowness...

test_big_download.slow = 1
At run time you can then filter on the attribute; the ! prefix excludes tests that carry it:

$ nosetests -a '!slow'
This style is clumsy; the other way is simpler:
from nose.plugins.attrib import attr

@attr(speed='slow')
def test_big_download():
    import urllib
    # commence slowness...
When running, you only need to add the attribute filter on the command line, as follows:
$ nosetests -a speed=slow
In a real project a test can carry several attributes, which can be combined when selecting:

nosetests -a priority=2,status=stable

Attribute expressions (nose's -A option) are rarely used in real projects and are not covered here; see the official docs: https://nose.readthedocs.io/en/latest/plugins/attrib.html
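To illustrate, here is a minimal sketch of a test carrying the two attributes used in the command above (the test name is made up for this example):

from nose.plugins.attrib import attr

@attr(priority=2, status='stable')
def test_checkout_flow():
    # selected only when the -a filter matches its attributes
    assert True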
2. Capture: capture standard output during the test
This plugin captures standard output produced while the tests run; in practice the option is needed fairly rarely. Usage:

-s, --nocapture    do not capture stdout (output is printed immediately)
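As a quick illustration (a made-up test in a hypothetical file), print output is swallowed by default and only shown on failure, unless capture is disabled:

# test_capture_demo.py
def test_prints_progress():
    # hidden by default; shown live with: nosetests -s test_capture_demo.py
    print("downloading chunk 1 ...")
    assert True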
3. Collect: quickly collect the names of the test cases to be run

This feature is typically used together with the TestId plugin (described later), mainly to list the IDs and names of the cases that would run, without actually running them. Usage:
E:\workspace\nosetest_lear\test_case>nosetests -v test_case_0001 --collect-only
test_case_0001.test_learn_1 ... ok
test_case_0001.test_lean_2 ... ok
----------------------------------------------------------------------
Ran 2 tests in 0.004s

OK
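Combined with the TestId plugin from the next section, the same listing also shows each case's ID (a sketch; output abbreviated):

nosetests -v --with-id --collect-only
#1 test_case_0001.test_learn_1 ... ok
#2 test_case_0001.test_lean_2 ... ok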
4. Logcapture: capture logs during the test

This plugin is used very frequently: logs are what you rely on to locate problems during testing, and the plugin can be configured to store and display them. It has several options:
--nologcapture            disable log capture
--logging-format=FORMAT   display logs in a custom format
--logging-datefmt=FORMAT  like the above, but for the date/time format
--logging-filter=FILTER   filter the captured logs; rarely used in practice
--logging-clear-handlers  clear other log handlers; can usually be ignored
--logging-level=DEFAULT   define the log level to capture
For example, a log configuration file such as the following:
#logging.conf
[loggers]
keys=root,nose,boto

[handlers]
keys=consoleHandler,rotateFileHandler

[formatters]
keys=simpleFormatter

[formatter_simpleFormatter]
format=%(asctime)s [%(levelname)s] %(filename)s[line:%(lineno)d] %(message)s

[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=simpleFormatter
args=(sys.stdout,)

[handler_rotateFileHandler]
class=handlers.RotatingFileHandler
level=DEBUG
formatter=simpleFormatter
args=('f:/test_log.log', 'a', 200000, 9)

[logger_root]
level=DEBUG
handlers=consoleHandler,rotateFileHandler

[logger_nose]
level=DEBUG
handlers=consoleHandler,rotateFileHandler
qualname=nose
propagate=0

[logger_boto]
level=ERROR
handlers=consoleHandler,rotateFileHandler
qualname=boto
propagate=0
Then point nose at the file when running:
--logging-config=logging.conf
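To sketch how this looks from a test (a made-up module; the logger name is arbitrary), the captured log lines are attached to the failure report:

# test_log_demo.py
import logging

log = logging.getLogger(__name__)

def test_with_logging():
    log.debug("starting computation")
    log.error("unexpected state")  # shown in the report if the test fails
    assert 1 + 1 == 2

Run it with: nosetests --logging-config=logging.conf test_log_demo.py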
5. TestId: add a test ID to each case in the output
This is very simple to use, as follows:
% nosetests -v --with-id
#1 tests.test_a ... ok
#2 tests.test_b ... ok
#3 tests.test_c ... ok
Here -v prints the test case names.
Once IDs are assigned, you can pick which cases to run by ID:
% nosetests -v --with-id 2
#2 tests.test_b ... ok

% nosetests -v --with-id 2 3
#2 tests.test_b ... ok
#3 tests.test_c ... ok
This plugin also has a very useful feature: re-running only the cases that failed in the last run. Usage:
On the first run, add the --failed option:
% nosetests -v --failed
#1 test.test_a ... ok
#2 test.test_b ... ERROR
#3 test.test_c ... FAILED
#4 test.test_d ... ok
On the second run, still with --failed, only the failing cases are run:
% nosetests -v --failed
#2 test.test_b ... ERROR
#3 test.test_c ... FAILED
Once all the cases pass, the whole suite runs again. (The plugin persists the ID-to-name mapping between runs, by default in a .noseids file, which is what makes selecting by ID and --failed work.)
6. Xunit: output results in xunit format
This plugin is mainly used for continuous integration (e.g. Jenkins): in Jenkins, add the post-build action "Publish JUnit test result report" and enter the report file name. After the build you will get the desired report; a later article will cover this in detail.
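On the nose side, generating the XML report looks like this (the file name is arbitrary; nose defaults to nosetests.xml):

nosetests --with-xunit --xunit-file=nosetests.xml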