Create script
Functional testing is a key part of software development, and bash, which comes with Linux, can help you complete it easily. In this article, Angel Rivera explains how to use bash shell scripts to run the command-line interface of Linux applications for functional testing. Because this approach relies on the return codes of command lines, it cannot be applied to GUI applications.
Functional testing is the phase of the development cycle in which a software application is tested to make sure its functions behave as expected. It usually takes place after the unit tests of the individual modules are finished and before the system test of the whole product under load/stress conditions.
There are many test tools available on the market to help with functional testing. However, you need to obtain them first, and then install and configure them, which will take up your precious time and energy. Bash can help you avoid these troubles and speed up the testing process.
The bash shell script has the following advantages:
Bash is already installed and configured on Linux, so you don't have to spend any time getting it ready.
You can create and modify bash shell scripts with a text editor that comes with Linux, such as vi; you don't need to obtain any specialized tool for writing the test programs.
If you already know how to develop Bourne or Korn shell scripts, that knowledge carries over to bash, so for you there is practically no learning curve.
The bash shell provides plenty of programming constructs for developing scripts that range from very simple to moderately complex.
Suggestions for porting scripts from Korn shell to bash
If you already have existing Korn shell scripts and want to port them to bash, consider the following:
The "print" command of Korn cannot be used in Bash. Instead, the "Echo" command is used instead.
The first line of the script must be changed from:
#!/usr/bin/ksh
to:
#!/bin/bash
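For instance, a Korn shell fragment and its bash equivalent might look like this (a minimal sketch, not taken from the downloadable scripts):

Code:
# Korn shell version:
#!/usr/bin/ksh
print "Testing has started"

# bash version:
#!/bin/bash
echo "Testing has started"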
Create a bash shell script for functional testing
These basic steps and suggestions apply to many client/server applications running on Linux.
Record the prerequisites and main steps for running the script
Divide operations into several logical groups
Base the implementation steps on an overall plan
Provide comments and instructions in each shell script
Make an initial backup to create a baseline
Check input parameters and environment variables
Try to provide "usage" feedback
Try to provide a "silent" running mode
Provide a function to terminate the script when an error occurs
If possible, provide functions that can execute a single task
Capture the output of each script as it is generated
Capture the return code of each command line in each script
Count the number of failed transactions
Highlight error messages in the output file to facilitate identification
If possible, "real-time" file generation
Provide feedback during script execution
Provide a summary of the script execution
Provide an easy-to-interpret output file
If possible, provide a way to clean up and return to the baseline
Each suggestion, along with the sample script that illustrates it, is described in detail below. To download the scripts, see the references later in this article.
1. Record the prerequisites and main steps for running the script
It is important to document the functional test in a single file with a self-describing name (such as "README-testing.txt") that records the main ideas, including the prerequisites, the server and client setup, the overall (or detailed) steps the scripts follow, how to check whether the scripts succeeded or failed, and how to clean up and restart the test.
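For example, a skeleton of such a file might look like the outline below (the section names here are only suggestions, not part of the downloadable material):

Code:
README-testing.txt
 1. Prerequisites (products and environment variables needed)
 2. Server setup
 3. Client setup
 4. Order in which to run the test scripts
 5. How to verify success or failure
 6. How to clean up and restart the test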
2. Divide operations into several logical groups
If you only execute a small number of operations, you can put them all in a simple shell script.
However, if you need to perform a large number of operations, it is better to divide them into logical sets, for example placing some server operations in one file and the client operations in another. This gives the tests an appropriate granularity for both running and maintaining them.
3. Base the implementation steps on an overall plan
Once you have decided how to group the operations, consider the order of the steps needed to perform them, following an overall plan. The idea is to simulate what end users do in real life. As a general principle, you only need to exercise the 20% of the functions that account for 80% of the typical usage.
For example, assume that the application requires three test groups to be run in a specific order. Each test group can be placed in a file with a self-describing name and, if possible, numbered to indicate the order in which the files are run, for example:
Code:
1. fvt-setup-1:  To perform initial setup.
2. fvt-server-2: To perform server commands.
3. fvt-client-3: To perform client commands.
4. fvt-cleanup:  To clean up the temporary files, in order to prepare for the repetition of the above test cases.
4. Provide comments and instructions in each shell script
It is a good coding habit to provide comments and instructions in the header of each shell script. That way, when another tester runs the script, he or she will clearly understand the scope of the test in each script, all the prerequisites, and the caveats.
The following is the header of an example bash script, "test-bucket-1":
Code:
#!/bin/bash
# Name: test-bucket-1
#
# Purpose:
#    Performs the test-bucket number 1 for Product X.
#    (Actually, this is a sample shell script,
#     which invokes some system commands
#     to illustrate how to construct a bash script.)
#
# Notes:
# 1) The environment variable TEST_VAR must be set
#    (as an example).
# 2) To invoke this shell script and redirect standard
#    output and standard error to a file (such as
#    test-bucket-1.out) do the following (the -s flag
#    is "silent mode" to avoid prompts to the user):
#
#    ./test-bucket-1 -s  2>&1 | tee test-bucket-1.out
#
# Return codes:
#    0 = All commands were successful
#    1 = At least one command failed, see the output file
#        and search for the keyword "ERROR".
#
##########################################################
5. Make an initial backup to create a baseline
You may need to perform multiple functional tests. When you run it for the first time, you may find some errors in the script or process. Therefore, to avoid wasting a lot of time on re-creating the server environment from scratch-especially if the database is involved-you may want to make a backup before testing.
After running the function test, you can recover the server from the backup and prepare for the next test.
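How the backup is made depends entirely on the product under test; as a rough sketch, assuming the server keeps its data under a hypothetical directory /var/lib/productx, it could be as simple as:

Code:
# Sketch only: back up a hypothetical server data directory before testing,
# then restore it afterwards to return to the baseline.
cd /var/lib
tar czf $HOME/productx-baseline.tar.gz productx    # create the baseline

# ... run the functional tests ...

cd /var/lib
rm -rf productx
tar xzf $HOME/productx-baseline.tar.gz             # restore the baseline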
6. Check the input parameters and environment variables.
It is best to check the input parameters and whether the environment variables are set correctly. If there is a problem, display the cause and solution of the problem, and then terminate the script.
Testers appreciate a script that terminates right away when the environment variables it needs are not set correctly; nobody likes to wait a long time for a script to run only to find out that the variables were not set properly.
Code:
# ----------------------------------
# Main routine for the test bucket
# ----------------------------------

CALLER=`basename $0`      # The caller name
SILENT="no"               # User wants prompts
let "errorCounter = 0"

# ----------------------------------
# Handle keyword parameters (flags).
# ----------------------------------
# For more sophisticated usage of getopt in Linux,
# see the samples file: /usr/lib/getopt/parse.bash

TEMP=`getopt hs $*`
if [ $? != 0 ]
then
  echo "$CALLER: Unknown flag(s)"
  usage
fi

# Note quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true
do
  case "$1" in
    -h) usage "help"; shift;;    # Help requested
    -s) SILENT="yes"; shift;;    # Prompt not needed
    --) shift; break;;
    *)  echo "Internal error!"; exit 1;;
  esac
done

# ----------------------------------------------------
# The following environment variables must be set
# ----------------------------------------------------
if [ -z "$TEST_VAR" ]
then
  echo "Environment variable TEST_VAR is not set."
  usage
fi
The script is described as follows:
The statement CALLER=`basename $0` obtains the name of the running script. With it you do not have to hard-code the script name inside the script itself, which reduces the rework needed when a script is copied to create a new, derived one.
When the script is invoked, the statement TEMP=`getopt hs $*` gathers the input flags (-h for help, -s for silent mode).
The statements [ -z "$TEST_VAR" ], echo "Environment variable TEST_VAR is not set.", and usage are used to check whether the string is null (-z); if it is, the echo statement reports which variable is not set and the "usage" function (discussed below) is called.
If the script does not use flags, you can use the variable "$#", which returns the number of arguments passed to the script.
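For example, a script that expects positional parameters instead of flags might validate them like this (a sketch that reuses the CALLER variable and the usage function shown in this article):

Code:
# Sketch: verify that exactly two arguments were passed to the script
if [ $# -ne 2 ]
then
  echo "$CALLER: expected 2 arguments, received $#."
  usage
fi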
7. Try to provide "usage" feedback
It is a good idea to provide a "usage" function in the script to explain how to invoke it.
Code:
# ----------------------------
# Subroutine to echo the usage
# ----------------------------
usage()
{
  echo "USAGE: $CALLER [-h] [-s]"
  echo "WHERE: -h = help"
  echo "       -s = silent (no prompts)"
  echo "PREREQUISITES:"
  echo "* The environment variable TEST_VAR must be set,"
  echo "  such as:"
  echo "    export TEST_VAR=1"
  echo "$CALLER: exiting now with rc=1."
  exit 1
}
When calling the script, you can use the "-h" flag to invoke the "usage" function, as follows:
./test-bucket-1 -h
8. Try to provide a "silent" running mode
You may want the script to run in two modes:
In "verbose" mode (you may want to use this as the default value), you are prompted to enter a value, or simply press enter to continue running.
The user input data is not prompted in "silent" mode.
The following excerpt illustrates how to use the called flag "-s" to run the script in quiet mode:
Code:
# ----------------------------------------------------
# Everything seems OK, prompt for confirmation
# ----------------------------------------------------
if [ "$SILENT" = "yes" ]
then
  RESPONSE="y"
else
  echo "The $CALLER will be performed."
  echo "Do you wish to proceed [y or n]? "
  read RESPONSE       # Wait for the response
  [ -z "$RESPONSE" ] && RESPONSE="n"
fi

case "$RESPONSE" in
  [yY]|[yY][eE]|[yY][eE][sS])
    ;;
  *)
    echo "$CALLER terminated with rc=1."
    exit 1
    ;;
esac
9. Provide a function to terminate the script when an error occurs
In case of a severe error, it is a good idea to have a central function that terminates the running script. The function can also provide additional instructions on what to do in this situation:
Code:
# ----------------------------------
# Subroutine to terminate abnormally
# ----------------------------------
terminate()
{
  echo "The execution of $CALLER was not successful."
  echo "$CALLER terminated, exiting now with rc=1."
  dateTest=`date`
  echo "End of testing at: $dateTest"
  echo ""
  exit 1
}
10. If possible, provide functions that execute a single task
For example, instead of using many long command lines like these:
Code:
# ------------------------------------------------------
echo ""
echo "Creating access lists..."
# ------------------------------------------------------

Access -create -component Development -login ted -authority projectlead -verbose
if [ $? -ne 0 ]
then
  echo "ERROR found in Access -create -component Development -login ted -authority projectlead"
  let "errorCounter = errorCounter + 1"
fi

Access -create -component Development -login pat -authority general -verbose
if [ $? -ne 0 ]
then
  echo "ERROR found in Access -create -component Development -login pat -authority general"
  let "errorCounter = errorCounter + 1"
fi

Access -create -component Development -login jim -authority general -verbose
if [ $? -ne 0 ]
then
  echo "ERROR found in Access -create -component Development -login jim -authority general"
  let "errorCounter = errorCounter + 1"
fi
... instead, create a function like the one below, which also handles the return code and, if necessary, increments an error counter:
Code:
CreateAccess()
{
  Access -create -component $1 -login $2 -authority $3 -verbose
  if [ $? -ne 0 ]
  then
    echo "ERROR found in Access -create -component $1 -login $2 -authority $3"
    let "errorCounter = errorCounter + 1"
  fi
}
... and then call the function in a way that is easy to read and extend:
Code:
# ------------------------------------------------
echo ""
echo "Creating access lists..."
# ------------------------------------------------
CreateAccess Development ted projectlead
CreateAccess Development pat general
CreateAccess Development jim general
11. Capture the output of each script when the output is being generated
If the script does not send its output to a file on its own, you can take advantage of bash shell facilities to capture the output of the script being executed, such as:
./test-bucket-1 -s 2>&1 | tee test-bucket-1.out
Let's analyze the above command:
"2> & 1" command:
Use "2> & 1" to redirect standard errors to standard output. String "2> & 1" indicates that any errors should be delivered to the standard output, that is, the file ID of 2 in Unix/Linux represents a standard error, and the file ID of 1 represents a standard output. If this string is not used, only the correct information is captured, and the error information is ignored.
Pipeline "|" and "Tee" commands:
Unix/Linux processes are similar to simple pipelines. In this case, you can create an MPS queue to use the output of the expected script as the input of the MPs queue. The next thing to decide is how to process the output content of the pipeline. In this case, we capture it into the output file, which is called "test-bucket-1.out" in this example ".
However, in addition to capturing the output results, we also want to monitor the output generated when the script is running. To achieve this, we connect the "Tee" (T-shaped pipe) that allows two tasks at the same time: place the output results in the file and display the output results on the screen. The pipeline is similar:
Code:
process --> T ---> output file
            |
            V
          screen
If you only want to capture the output and do not need to see it on the screen, you can leave out the "tee" part of the pipeline: ./test-bucket-1 -s > test-bucket-1.out 2>&1
In that case, the flow looks like this:
process --> output file
12. Capture the codes returned by each command line in each script
One way to determine whether a functional test succeeds or fails is to count the number of failed command lines, that is, the command lines whose return code is not 0. The variable "$?" provides the return code of the most recently invoked command; in the following example, it provides the return code of the "ls" command.
Code:
# -----------------------------------------
# The commands are called in a subroutine
# so that the return code can be
# checked for possible errors.
# -----------------------------------------
ListFile()
{
  echo "ls -al $1"
  ls -al $1
  if [ $? -ne 0 ]
  then
    echo "ERROR found in: ls -al $1"
    let "errorCounter = errorCounter + 1"
  fi
}
13. Count the number of failed transactions
One way to determine whether the functional test succeeds or fails is to count the number of command lines whose return code is not 0. However, until then my experience with bash shell scripts had been limited to strings rather than integers, and the manuals I consulted were not very clear on how to handle integers. That is why I discuss here how I use integers to count the errors (failed command lines):
First, initialize the counter variables as follows:
Let "errorcounter = 0"
Then, issue the command and use the "$?" variable to capture its return code. If the return code is not 0, increment the counter by 1, as in the ListFile function shown below:
Code:
ListFile()
{
  echo "ls -al $1"
  ls -al $1
  if [ $? -ne 0 ]
  then
    echo "ERROR found in: ls -al $1"
    let "errorCounter = errorCounter + 1"
  fi
}
By the way, you can display integer variables with "echo", just like other variables.
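For example (a one-line sketch):

Code:
echo "Current number of errors: $errorCounter"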
14. Highlight the error message for easy identification in the output file
When an error (or failed transaction) occurs, besides incrementing the error counter it is a good idea to flag the error. Ideally, the message should contain a substring such as ERROR (or something similar) that lets the tester locate the errors in the output file quickly; the output file may be large, so being able to find the errors fast is important.
Code:
ListFile()
{
  echo "ls -al $1"
  ls -al $1
  if [ $? -ne 0 ]
  then
    echo "ERROR found in: ls -al $1"
    let "errorCounter = errorCounter + 1"
  fi
}
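Because every failure message contains the same keyword, the tester can later pull all the failures out of the output file with a standard tool such as grep, for example:

Code:
grep -n "ERROR" test-bucket-1.out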
15. If possible, generate needed files "on the fly"
In some cases it is necessary to work with files used by the application. You can use existing files, or you can add statements to the script that create the needed files. If a file to be used is long, it is better to keep it as a separate entity. If the file is small and its contents are simple or even irrelevant (the important point being that a text file exists, regardless of its contents), then you may decide to create these temporary files "on the fly".
The following code shows how to create a temporary file on the fly:
Code:
cd $HOME/fvt
echo "Creating file softtar.c"
echo "Subject: This is softtar.c"  > softtar.c
echo "This is line 2 of the file" >> softtar.c
The first echo statement uses a single > to create the new file (overwriting any existing one). The second echo statement uses >> to append data to the end of the existing file; by the way, if the file does not exist, >> creates it.
16. Provide feedback during Script Execution
It is best to include echo statements in the script to indicate how far it has progressed through its logic. You can add statements that quickly indicate what the output is about.
If the script takes some time to execute, it is a good idea to print the time at the beginning and at the end of the script run, so the elapsed time can be calculated.
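If you also want the script to compute the elapsed time itself, one possible approach (a sketch, not something the sample script does) is to record the start and end times in seconds and subtract them:

Code:
startTime=`date +%s`      # seconds since the epoch, at the start

# ... the test commands run here ...

endTime=`date +%s`        # seconds since the epoch, at the end
echo "Elapsed time: $((endTime - startTime)) seconds"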
In the script sample, some echo statements that provide progress descriptions are as follows:
Code:
# ----------------------------------------
echo "Subject: Product X, FVT testing"
dateTest=`date`
echo "Begin testing at: $dateTest"
echo ""
echo "Testcase: $CALLER"
echo ""
# ----------------------------------------

# ------------------------------------------
echo ""
echo "Listing files..."
# ------------------------------------------
# The following file should be listed:
ListFile $HOME/.profile

...

# ------------------------------------------
echo ""
echo "Creating file 1"
# ------------------------------------------
17. Provide a summary of Script Execution
If you are counting the number of errors or failed transactions, it is a good idea to report whether any were found. That way the tester can go to the end of the output file and quickly see whether there were errors.
The following excerpt from the sample script prints the execution summary:
Code:
# --------------
# Exit
# --------------
if [ $errorCounter -ne 0 ]
then
  echo ""
  echo "*** $errorCounter ERRORS found during ***"
  echo "*** the execution of this test case. ***"
  terminate
else
  echo ""
  echo "*** Yeah! No errors were found during ***"
  echo "*** the execution of this test case. Yeah! ***"
fi

echo ""
echo "$CALLER complete."
echo ""
dateTest=`date`
echo "End of testing at: $dateTest"
echo ""
exit 0

# end of file
18. Provide an output file that is easy to interpret
It is very useful to provide some key information in the output generated by the script. That way the tester can easily decide whether the file being viewed is relevant to the current effort and whether it was produced by the current run. The added timestamps are important for establishing that the output is current; the summary report helps determine whether there were errors, and if there were, the tester can then search for the designated keyword, such as ERROR, to find each failed transaction.
The following is a sample of the output file:
Code:
Subject: CMVC 2.3.1, FVT testing, common, part 1
Begin testing at: Tue Apr 18 12:50:55 EDT 2000
Database: DB2
Family:   cmpc3db2
Testcase: fvt-common-1

Creating users...
User pat was created successfully.

...

Well done! No errors were found during
the execution of this test case :)

fvt-common-1 complete.
End of testing at: Tue Apr 18 12:56:33 EDT 2000
When errors occur, the end of the output file looks like this:
Code:
ERROR found in Report -view DefectView

*** 1 ERRORS found during ***
*** the execution of this test case. ***

The populate action for the CMVC family was not successful.
Recreating the family may be necessary before running fvt-client-3 again,
that is, you must use 'rmdb', 'rmfamily', 'mkfamily' and 'mkdb -d',
then issue: fvt-common-1 and optionally, fvt-server-2.

fvt-client-3 terminated, exiting now with rc=1.
End of testing at: Wed Jan 24 17:06:06 EST 2001
19. If possible, provide a way to clean up after the scripts and return to the baseline
The test scripts may generate temporary files; if so, it is best to have the scripts themselves delete all the temporary files they create. This avoids mistakes in which the tester either fails to delete all the temporary files or, worse, deletes files that only look like temporary files.
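For example, the cleanup could be centralized in a small function that removes only the files the script itself created (a sketch; the function name is arbitrary and the file name follows the earlier softtar.c example):

Code:
# ---------------------------------------------------
# Subroutine to remove the temporary files created
# by this script, returning the system to the baseline
# ---------------------------------------------------
cleanup()
{
  echo "Removing temporary test files..."
  rm -f $HOME/fvt/softtar.c
}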
Run the bash shell scripts for functional testing
This section describes how to use a bash shell script to test functions. Assume that you have performed the steps described in the previous section.
Set necessary environment variables
Specify the following environment variable in .profile, or set it manually as needed. This variable is used only to illustrate how the scripts handle environment variables; the required environment variables must be set before the scripts are run.
export TEST_VAR=1
Copy the bash shell script to the correct directory.
The bash shell scripts and related files need to be copied into the directory structure of the user ID that performs the functional test.
Log on to the account. You should be in the home directory; assume it is /home/tester.
Create a directory for the test case: mkdir fvt
Copy the bash shell scripts and related files: obtain the compressed file (see the references), place it under $HOME, and decompress it as follows: unzip trfvtbash.zip
Change the file permissions so the scripts can be executed: chmod u+x *
Rename the script to remove the file suffix: mv test-bucket-1.bash test-bucket-1
Run scripts
Perform the following steps to run the script:
Log on with the tester's user ID.
Change to the directory where the scripts were copied: cd $HOME/fvt
Run the script from $HOME/fvt: ./test-bucket-1 -s 2>&1 | tee test-bucket-1.out
Look at the end of the output file "test-bucket-1.out" to see the summary report and the test conclusion.