Preface
A previous blog post, "Jenkins: jenkins + jmeter timed testing", has already been written.
In that post, only one script was executed in the shell, the corresponding JTL file was generated,
and its data was analyzed.
Sometimes we need to test many scripts. Creating a separate job for each one means each job gets its own workspace,
which makes results hard to find, hard to categorize, and hard to analyze, so creating many workspaces does not work well.
In that case we may need to configure multiple scripts to be tested within a single job.
Body
When executing multiple scripts, only the Execute shell and Publish performance test result steps differ slightly.
The rest of the configuration and the data analysis are the same as before and will not be repeated here.
① Configure Execute shell
Terms explained in the previous blog are not elaborated again here; refer to that post if anything is unclear.
⑴ The two statements below share the format introduced in the previous blog, joined in the middle with &&
⑵ Alternatively, add two Execute shell build steps with one statement each; the effect is the same
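As a rough sketch, an Execute shell step that runs two scripts back to back might look like the following. The script and result paths are placeholders taken from the directory example later in this post; adjust them to your own layout:

```shell
# Run two JMeter test plans in non-GUI mode, one after the other.
# -n = non-GUI mode, -t = test plan file, -l = results (JTL) file.
# && makes the second run start only if the first one succeeds.
jmeter -n -t a/aa/aaaa.jmx -l a/aa/aaaa.jtl && \
jmeter -n -t b/bb/bbbb.jmx -l b/bb/bbbb.jtl
```

Splitting this into two separate Execute shell build steps, one `jmeter` line each, behaves the same way.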
② Configure Publish performance test result
The key point here: step ① uses the shell build to generate the JTL files,
but generating JTL files is only part of the testing effort; what matters more is analyzing the data.
If the pattern in step ② is inaccurate, the resulting chart data will not be what you want.
The contents of step ② are very flexible: fill in the pattern according to where your JTL files are produced.
As shown above, **/* means that every JTL file in this job's workspace gets chart analysis.
Here * is a wildcard, much like the * used when searching for files on Windows.
You can write an exact filename here, or use wildcards to analyze a batch of JTL files.
Note that when using wildcards to analyze a batch of files,
if two files share the same name, they are matched in directory-hierarchy lookup order, and the later file does not produce an analysis chart.
For example, following the first screenshot: suppose there is also a bbbb.jtl file under a/aa/,
and the pattern in Publish performance test result is **/*/*.jtl.
When the charts are generated, charts appear for a/aa/aaaa.jtl and a/aa/bbbb.jtl,
but not for b/bb/bbbb.jtl, since it shares a name with a file matched earlier.
This does not affect the build itself, which still generates all the files, a/aa/aaaa.jtl and b/bb/bbbb.jtl included.
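To make the example concrete, here is a rough shell approximation of how such a pattern walks the workspace, using the same directory names as above. Note that `find` itself lists every match; skipping later files with a duplicate name is behavior of the Jenkins plugin, not of the filesystem:

```shell
# Recreate the example layout and list every .jtl file a recursive pattern can reach.
mkdir -p demo/a/aa demo/b/bb
touch demo/a/aa/aaaa.jtl demo/a/aa/bbbb.jtl demo/b/bb/bbbb.jtl
find demo -name '*.jtl' | sort
# demo/a/aa/aaaa.jtl
# demo/a/aa/bbbb.jtl
# demo/b/bb/bbbb.jtl
```

All three files match, but in the scenario above only the first two would receive analysis charts, because bbbb.jtl appears twice.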
Finally, a few things to note:
If the full name of the generated file never changes, each build's results are appended to the same file,
so the data analysis looks as if the script had been recorded with one extra loop per build.
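One way to avoid this stacking (a sketch, not from the original post) is to delete the old results file at the start of the Execute shell step, since JMeter appends to an existing results file rather than overwriting it. The paths are the same placeholders used earlier:

```shell
# Sketch: clear the previous build's results so each build's chart
# reflects only the current run instead of accumulating old data.
rm -f a/aa/aaaa.jtl
jmeter -n -t a/aa/aaaa.jmx -l a/aa/aaaa.jtl
```

Alternatively, keep the fixed filename if you deliberately want the chart to show the accumulated history across builds.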
The most important thing to note:
if a build fails because one or more requests have a high error rate,
the next build will still fail in the same way, but this does not affect the data analysis.
Conclusion
Every day is different.