Software Test Proposal


1. Introduction

1.1. Objective

Test whether the various functional modules of the software meet user requirements and check for the existence of bugs. The goal is to drive rapid improvement of the system and to uncover as many software errors as possible before the software is put into production.

1.2. Background

a. The system is intended to help busy workers and students record information for study and work, so that nothing is forgotten and heavy losses are avoided.

b. Project history: this lists the users of the project and the organizations or groups that tested it. The project went through three stages: the preliminary design phase, the development phase, and finally the software testing phase. The intended users are students at the school who want to memorize words in their spare time, and the functional tests of the system are mainly carried out by professional software testers.

1.3. Scope

The main purpose of testing is to check that the software's functions meet customers' needs, that its performance is adequate, and to find any problems in the system. Each module of the system is tested in detail, the test results are recorded, and those results are carefully analyzed and processed. During testing, the system is split into its functional modules and each module is tested separately. All possible outcomes are tested, problems found during testing are analyzed, and the test records are then submitted. Finally, the defects found and the measured performance are analyzed comprehensively and documented.

During testing, assumptions are made about each problem, and the system is improved according to the functional modules and user requirements described in the requirements document. This plan also lists all risks or unexpected events that may affect the design, development, or implementation of the tests, as well as all constraints that may affect them.

1.4. Definition

Information: the data stored in the database about each word, such as the word itself, its meaning, and its part of speech.

Management: selection of the word lists (thesauri) at each level.

1.5. References

The following documents and materials are to be consulted during the preparation of the plan and the tests:

Number | Information name                                         | Author     | Date   | Publishing unit
1      | Introduction and Improvement of Software Testing         | Zhang      | 2008.6 | Tsinghua University Press
2      | Fundamentals of Software Testing Tutorial                | Liu Jianyu | 2007.3 | University of Posts and Telecommunications Press
3      | Introduction and Application of Software Test Automation | Li Gang    | 2004.4 | Mechanical Industry Press

2. Test content

The following table lists the test requirements and their priorities:

Subsystem name | Module name | Test point           | Priority level | Description
Today's events | Voice input | Start recording      |                | First action
Today's events | Voice input | End recording        |                | Follow-up action after starting the recording
Today's events | Voice input | Save recording       |                | Follow-up action after ending the recording
Today's events | Voice input | Click on blank space |                | No response
Today's events | Voice input | Return               |                | Exits the voice input function

3. Test Rules

3.1. Entry Criteria

Testing may begin once the installation package has been installed successfully and the application can be used.

3.2. Suspension/Exit Guidelines

If, during unit, integration, validation, system, installation, or acceptance testing, the software is found to contain one or more level-1 errors, or two or more level-2 errors, testing is suspended and the system is returned to development. Each test stage (unit, integration, validation, system, installation, acceptance) exits according to its own stop criteria. Testing ends when the software system passes acceptance testing and the acceptance test conclusion has been obtained. When a software project needs to be paused for adjustment, testing should also pause and the data at the pause point should be backed up. If a project suffers major estimation or schedule deviations during its development life cycle, or is suspended or terminated, testing should be suspended or terminated accordingly and the data at that point backed up.

3.3. Test method

First, the functional modules are divided up and responsibility for testing each of them is assigned. Each module is then tested. Black-box testing, also known as functional testing or data-driven testing, checks each function the product is known to require in order to determine whether every function works. During the test the program is treated as a closed black box: its internal structure and internal characteristics are ignored, and the tester works only at the program's interfaces. The test checks only whether the program behaves as the requirements specification says it should: whether it correctly accepts input data, produces the correct output, and preserves the integrity of external information such as databases or files. The main black-box techniques are equivalence class partitioning, boundary value analysis, cause-effect graphing, and error guessing; they are mainly used for software validation testing. Black-box testing focuses on the external behaviour of the program, regardless of its internal logical structure, and tests the software's interfaces and functions. Taken to the extreme, the black-box approach would require exhaustive input testing, using every possible input as a test case in order to expose every error; in practice there are infinitely many possible test cases, so testers must cover not only all legitimate inputs but also inputs that are illegal yet possible. A minimal test sketch follows.
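The following is a minimal sketch, in JUnit 4, of black-box cases derived from equivalence classes and boundary values. The small in-memory WordBook class is a hypothetical stand-in for the app's word-query module (it is not the project's actual code); only its externally visible behaviour is exercised, as black-box testing requires.

    import org.junit.Test;
    import static org.junit.Assert.*;
    import java.util.HashMap;
    import java.util.Map;

    // Black-box cases from equivalence classes and boundary values.
    // WordBook is a hypothetical stand-in for the word-query module.
    public class WordLookupBlackBoxTest {

        static class WordBook {
            private final Map<String, String> entries = new HashMap<>();
            WordBook() { entries.put("apple", "a round fruit; noun"); }
            String lookup(String word) {
                if (word == null || word.isEmpty()) return null;   // reject empty input
                return entries.get(word);                          // null if unknown
            }
        }

        @Test
        public void validWordReturnsDefinition() {   // valid equivalence class
            assertNotNull(new WordBook().lookup("apple"));
        }

        @Test
        public void unknownWordReturnsNull() {       // invalid equivalence class
            assertNull(new WordBook().lookup("zzzz"));
        }

        @Test
        public void emptyInputIsHandled() {          // boundary value: zero-length input
            assertNull(new WordBook().lookup(""));
        }
    }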

3.4. Test Methods for Whole-System Functional Testing After Module Testing Is Complete

Path testing. A path is the sequence of steps the tester performs, or the sequence of statements the program executes, to reach a given state. Path testing exercises many different paths through the program. Covering every path through a non-trivial program is impossible, so some testers perform sub-path testing instead, exercising many partial paths.

Statement and branch coverage. If the tests execute every statement (or line of code) in the program, 100% statement coverage is reached. If, in addition, every branch between statements is taken in every direction, 100% statement and branch coverage is reached. Designing tests specifically to achieve high statement and branch coverage is sometimes called coverage-based testing (once the coverage target is reached, you can stop testing or stop designing more tests). The term "statement and branch coverage" distinguishes this from tests that aim at other kinds of coverage; configuration coverage is a good example of a measure that may execute the same statements many times yet produce very different results. The sketch below illustrates the difference.
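A small illustration of the difference between the two measures, using a hypothetical helper method (not taken from the project's code): calling grade(80) alone already executes every statement (100% statement coverage) but takes only the true side of the if (50% branch coverage); adding grade(30) covers the remaining branch.

    // Statement vs. branch coverage on a hypothetical helper method.
    public class CoverageSketch {

        static String grade(int score) {
            String label = "fail";
            if (score >= 60) {        // the branch: both outcomes need to be taken
                label = "pass";
            }
            return label;
        }

        public static void main(String[] args) {
            System.out.println("test 1 (true branch):  " + grade(80));  // expect "pass"
            System.out.println("test 2 (false branch): " + grade(30));  // expect "fail"
        }
    }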

Configuration coverage. If you must test compatibility with 100 printers and have tested 10 of them, you have reached 10% printer coverage. More generally, configuration coverage is the number of configurations the tester has run the program against, as a percentage of the total number of configurations to be tested.

Specification-based testing. This testing focuses on verifying every statement of fact about the product made in the specification. (A statement of fact is any statement that can be judged true or false.) The material checked often includes manuals, marketing documents and advertisements, and any printed material sent to customers by technical support staff.

Requirements-based testing. This testing focuses on proving that the program meets every requirement in the requirements document (or, on a per-requirement basis, on proving that a particular requirement is not met).

Combination testing. This tests two or more variables together. The "Test Instruments" appendix at the end of this chapter also discusses the topic. Combinatorial testing is important, but many testers have not studied it well.

3.5. Test Essentials

The tests mainly check that the system's functions meet customer requirements, that the modules work together smoothly, and that the software is free of defects and loopholes.

3.6. Test Tools
    1. Load and stress testing tools

The main purpose of this kind of tool is to measure the scalability and performance of the application system; it is an automated testing tool that predicts the system's behaviour and performance. While the concurrent load is being applied, real-time performance monitoring is used to identify problems, and the system is then optimized against the problems found so that the application can be deployed successfully. Load and stress testing tools can exercise the entire enterprise architecture; with these tests an enterprise can minimize test time, optimize performance, and accelerate the application's release cycle. A minimal sketch of the underlying idea follows.
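As a rough sketch of the pattern such tools automate, the following Java program drives a number of concurrent virtual users against an endpoint and reports an average response time. The URL, the thread count, and the class name are placeholders; a real load tool adds ramp-up control, monitoring, and reporting on top of this idea.

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.concurrent.*;
    import java.util.concurrent.atomic.AtomicLong;

    // Minimal concurrent-load sketch: N virtual users hit one endpoint once
    // each, and the average elapsed time per request is reported.
    public class SimpleLoadSketch {
        public static void main(String[] args) throws Exception {
            final int virtualUsers = 20;
            final String target = "http://localhost:8080/health";   // hypothetical endpoint
            ExecutorService pool = Executors.newFixedThreadPool(virtualUsers);
            AtomicLong totalMillis = new AtomicLong();

            for (int i = 0; i < virtualUsers; i++) {
                pool.submit(() -> {
                    long start = System.currentTimeMillis();
                    try {
                        HttpURLConnection c = (HttpURLConnection) new URL(target).openConnection();
                        c.getResponseCode();          // issue the request and wait for the reply
                        c.disconnect();
                    } catch (Exception e) {
                        // a real harness would count this as a failed transaction
                    }
                    totalMillis.addAndGet(System.currentTimeMillis() - start);
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
            System.out.println("average response time (ms): " + totalMillis.get() / virtualUsers);
        }
    }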

    2. Functional testing tools

By automatically recording, replaying and checking the user's operations on the application, and comparing the system's recorded output with pre-defined standard results, a functional testing tool helps testers verify the functions of different release versions of a complex enterprise application, improving tester productivity and test quality. Its primary purpose is to detect whether the application delivers the expected functionality and works properly. A small sketch of the compare-against-expected idea follows.
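A minimal sketch of the compare-against-expected idea: replay a set of recorded inputs and diff the actual output against pre-defined standard results. The toUpperCase operation is a hypothetical stand-in for the behaviour under test; a real tool records GUI actions and captures screens or field values instead.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Replay recorded inputs and compare actual output against standard results.
    public class ReplayCompareSketch {
        public static void main(String[] args) {
            Map<String, String> expected = new LinkedHashMap<>();   // the "standard results"
            expected.put("apple", "APPLE");
            expected.put("word",  "WORD");

            int failures = 0;
            for (Map.Entry<String, String> e : expected.entrySet()) {
                String actual = e.getKey().toUpperCase();           // the replayed operation
                if (!actual.equals(e.getValue())) {
                    failures++;
                    System.out.println("MISMATCH for input: " + e.getKey());
                }
            }
            System.out.println(failures == 0
                    ? "all cases match the standard results"
                    : failures + " mismatches found");
        }
    }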

    3. Test management tools

Test management tools typically manage test requirements, test plans, test cases and test execution, and also track and manage defects. They allow testers, developers and other IT staff to collaborate through a central data repository.

4. Test Environment

4.1. Hardware Environment

1> Android smartphone

4.2. Software Environment

Android 4.0 or above.

4.3. Security Environment Requirements

The security of the operating system, of the testing tools, and of the software under test.

5. Project Tasks

The following are the test-related tasks involved in testing the student information management system:

5.1. Test planning

1. Response Time

We define "response time" as the time required to respond to a request; from the user's perspective it is the main manifestation of software performance. Response time is divided into two parts: render time and system response time, as illustrated in the sketch below.
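A minimal sketch of how the two parts could be measured, with sleeps standing in for the real work; the durations are invented for illustration only.

    // Measure the two parts of response time: system response (server round-trip)
    // and render time (client-side drawing). The sleeps simulate those phases.
    public class ResponseTimeSketch {
        public static void main(String[] args) throws InterruptedException {
            long t0 = System.nanoTime();
            Thread.sleep(120);                       // pretend server round-trip (system response time)
            long t1 = System.nanoTime();
            Thread.sleep(30);                        // pretend client-side rendering (render time)
            long t2 = System.nanoTime();

            System.out.printf("system response: %d ms, render: %d ms, total: %d ms%n",
                    (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, (t2 - t0) / 1_000_000);
        }
    }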

2. Number of concurrent users

We distinguish "concurrent users" from "simultaneous online users". The number of concurrent users depends on the target business scenario of the system under test, so before determining it the users' business must first be decomposed and the typical business scenarios analyzed (that is, the operations users perform most often), and the number of concurrent users is then derived from those scenarios using suitable methods (mathematical models and formulas for computing concurrent users).

The reasoning is as follows. Suppose an application system has a peak of 500 people online at the same time; these 500 people are not all concurrent users. Assume that at a given moment 50% of them are filling in complex forms (filling in a form places no load on the server; only pressing "Submit" does), 40% are jumping from page to page (continuously sending requests and receiving responses, which does load the server), and 10% are simply online and idle, doing nothing (no load on the server). Only the 40% actually put pressure on the server. This example shows that what matters is not the number of users online but the number of business-level concurrent users, which in turn depends on the business logic and business scenarios. That is why items 4, 5 and 6 of the performance test documentation are also needed. A small worked sketch of this example follows.
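A small worked sketch of the example above; the percentages are the assumed ones from the text and would normally come from analysing real business scenarios or server logs.

    // Worked example: 500 users online, but only the actively requesting share
    // counts as concurrent users putting load on the server.
    public class ConcurrentUserSketch {
        public static void main(String[] args) {
            int online = 500;
            double fillingForms = 0.50;   // no server load until they press Submit
            double browsing     = 0.40;   // continuously sending requests (real load)
            double idle         = 0.10;   // online but doing nothing

            int concurrent = (int) Math.round(online * browsing);
            System.out.println("simultaneous online users : " + online);
            System.out.println("effective concurrent users: " + concurrent);   // 200
            System.out.println("scenario shares sum to    : " + (fillingForms + browsing + idle));
        }
    }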

3. Throughput

We define throughput as the number of client requests the system handles per unit of time. It directly reflects the load-bearing capacity of the software system; for an interactive application it reflects the pressure on the server. In capacity-planning tests, throughput is an important indicator, reflected not only in the middleware and database but even more in the hardware. We use this indicator in the following areas (a small worked example follows the list):

(1) To assist in designing performance test scenarios and to measure whether the performance test has reached its design goals, for example the connection pool of a Java EE application system, the frequency of database transactions, and the number of transactions executed.

(2) To assist in analysing performance bottlenecks, as described in the second part on the overall RBI method.
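The promised worked example: throughput computed as requests handled per unit of time. The request count and measurement window are invented numbers; in practice they come from a load-test run or server logs.

    // Throughput = requests handled / measurement window.
    public class ThroughputSketch {
        public static void main(String[] args) {
            long handledRequests = 18_000;          // requests completed during the window
            long windowSeconds   = 600;             // a 10-minute measurement window

            double throughput = (double) handledRequests / windowSeconds;
            System.out.printf("throughput: %.1f requests/second%n", throughput);   // 30.0
        }
    }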

4. Performance counters

Performance counters are data metrics that describe the performance of a server or operating system; common examples on Windows include memory usage, CPU usage, and process time.

Performance counters are not limited to hardware counters; they also include Web server counters, WebLogic server counters, servlet performance counters, EJB2 performance counters, JSF performance counters, and JMS performance counters. Finding these metrics is only the first step; the key to using performance counters is to locate performance bottlenecks, determine system thresholds, and provide optimization recommendations. Performance counters are complex and diverse, and are closely related to the code context, system configuration, system architecture, development methods, specification implementations, tools, and library versions. The sketch below shows how a few basic counters can be read from inside a JVM.
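As a small illustration, the sketch below reads a few generic counters using the standard Java management beans. The Web server, WebLogic, servlet, EJB2, JSF and JMS counters mentioned above are exposed by those servers' own monitoring interfaces, not by this API.

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.OperatingSystemMXBean;

    // Read a few basic OS/JVM performance counters via the standard MXBeans.
    public class CounterSketch {
        public static void main(String[] args) {
            OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
            MemoryMXBean mem = ManagementFactory.getMemoryMXBean();

            System.out.println("available processors : " + os.getAvailableProcessors());
            System.out.println("system load average  : " + os.getSystemLoadAverage());
            System.out.println("heap used (bytes)    : " + mem.getHeapMemoryUsage().getUsed());
        }
    }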

5. Think Time

We treat think time as the user's "sleep time". From the business system's point of view, it is the interval between a user's successive requests after an operation. From the automated-testing point of view, simulating real user behaviour requires a pause between actions in the test script: a think function is placed between operations, which appears in the script as an interval between two request statements. Different test tools provide different functions or methods for think time; for example, HP LoadRunner and IBM Rational Performance Tester handle it in completely different ways. A minimal sketch follows.
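A minimal sketch of placing think time between two simulated requests in a hand-written test script; sendRequest is a hypothetical placeholder rather than a real tool API, and commercial tools provide their own think-time functions instead of a raw sleep.

    import java.util.Random;

    // Insert think time between two simulated requests to mimic a real user.
    public class ThinkTimeSketch {
        public static void main(String[] args) throws InterruptedException {
            Random random = new Random();

            sendRequest("open word list");
            // think time: the user reads the page for 2-5 seconds before acting again
            Thread.sleep(2000 + random.nextInt(3000));
            sendRequest("query word 'apple'");
        }

        private static void sendRequest(String action) {
            System.out.println("request sent: " + action);   // stands in for a real HTTP call
        }
    }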

5.2. Test design

User layer:

This mainly tests the product as the end operator will finally use it. The emphasis is on the operator's point of view: the system's user support, the standardization, friendliness and operability of the user interface, and data security. It mainly covers whether the user manual, online help and other supporting technical documentation for the product are correct, easy to understand, and user-friendly.

User interface Testing

Provided the user interface can be reached through the test object's controls, this tests whether the interface style meets user requirements: whether the interface is attractive and intuitive, whether operation is friendly and humanized, and whether it is easy to use.

Maintainability Testing

Maintainability is the ease with which the system's software and hardware can be maintained. The aim is to reduce the impact of maintenance work on the normal operation of the system, for example by testing features or tools that support remote maintenance of the system.

Security testing

Security here has two parts: data security and operational security. Verify that only data conforming to the specification can enter the system and that non-conforming data is rejected; verify that only users with the specified operating rights can access the system and that all others cannot.

Application layer:

Testing aimed at the product's engineering or industry application. The emphasis is on the system as applied: simulating the actual application environment and testing the system's compatibility, reliability and performance.

System Performance Testing

Tests of the entire system, including concurrency performance tests, load tests, stress tests, strength tests, and destructive tests. Concurrency performance testing evaluates where transaction or business processing bottlenecks appear as concurrency increases, and how much business the system can accept. Strength testing identifies errors caused by insufficient resources or resource contention under low-resource conditions. Destructive testing focuses on loads several times beyond the normal level, looking at whether errors occur, how often they occur, and how well the system recovers from them.

System reliability and stability testing

The reliability and stability of the system under a given load over long-term use.

System Compatibility test

Whether the system's software is compatible with various hardware devices, operating systems, and supporting software.

System Networking Test

Whether, in a networked environment, the system software supports the various access devices, covering both feature implementation and cluster performance.

System installation Upgrade Test

The purpose of installation testing is to ensure that the software behaves as expected when installed under both normal and abnormal conditions. Normal conditions include a first installation, an upgrade, and a complete or custom installation; abnormal conditions include insufficient disk space, missing permission to create directories, and so on. A further objective is to verify that the software works immediately after installation. The installation manual, installation scripts and so on also need attention.

5.3. Test Execution Readiness

Failover and recovery testing ensures that the test object can fail over successfully and can recover data after the various hardware, software or network failures that cause unexpected data loss or breaks in data integrity. Failover testing ensures that, for systems that must keep running, a standby system takes over from the failed system without interruption when a failure occurs, so that no data or transactions are lost. Recovery testing is an adversarial test process: the application or system is placed under extreme conditions (or simulated extreme conditions) to provoke failures such as device input/output (I/O) failures or invalid database pointers and keys; the recovery process is then invoked, and the application, system and data are inspected to verify that they have been restored correctly.

5.4. Test execution

1. Prerequisites: ensure that the project under test runs properly. This kind of testing is based on black-box techniques: the tester interacts with the application through its graphical user interface (GUI) and analyzes the output or results of that interaction to verify the application and its internal processing. This is the current focus of testing.

Execute the use cases and record the raw data.

2. Submit the test problem list and the test report

3. Regression and acceptance Testing

4. Output artifacts

Execute each use-case flow with both valid and invalid data to verify the following (a minimal sketch follows the list):

a) The expected results are obtained when valid data is used.

b) An appropriate error or warning message is displayed when invalid data is used.
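A minimal JUnit 4 sketch of checks a) and b); the saveWord method is a hypothetical stand-in for one step of a use-case flow, not the project's actual code.

    import org.junit.Test;
    import static org.junit.Assert.*;

    // Valid data must yield the expected result; invalid data must yield an error message.
    public class UseCaseDataTest {

        static String saveWord(String word) {
            if (word == null || word.trim().isEmpty()) {
                return "ERROR: word must not be empty";   // the warning the user should see
            }
            return "saved: " + word;
        }

        @Test
        public void validDataGivesExpectedResult() {      // case a)
            assertEquals("saved: apple", saveWord("apple"));
        }

        @Test
        public void invalidDataGivesErrorMessage() {      // case b)
            assertTrue(saveWord("   ").startsWith("ERROR"));
        }
    }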

6. Implementation Plan

6.1. Workload Estimation

Based on the work content and project tasks, the workload is estimated for test design, test execution and test summary, expressed in person-months or person-days, with notes on the proportion of work each activity represents. The software testing workload should be 30%-40% of the development workload.

Working stage             | Business days required | Percentage of project
Test planning phase       | 1                      | 15%
Test design phase         | 1                      | 15%
Test implementation phase | 1                      | 20%
Test execution phase      | 1                      | 20%
Test summary phase        | 1                      | 15%

6.2. Staffing requirements and arrangements

The following table lists the staffing arrangements for this Test activity:

Role          | Staff       | Specific responsibilities / remarks
Test manager  | Liu Junxian | Responsible for the overall arrangement and oversight of the software testing work
Test designer | ma si mian  | Responsible for designing test scenarios and test cases
Tester        | Li Cong     | Responsible for carrying out the specific tests of the project according to the test plan
Recorder      | Zhang Zixua | Responsible for recording test information during system testing

6.3. Schedule

The following table lists the scheduling of the tests:

Project milestone          | Start time | End time | Output requirements / notes
Test planning              | 09:00      | 10:00    |
Test design                | 10:10      | 11:10    |
Test design implementation | 11:30      | 13:30    |
Test execution             | 14:00      | 15:30    |
Test summary               | 16:00      | 18:00    |

6.4. Deliverables

This section lists the documents, tools and reports that will be created, along with their creators, the deliverables, and the delivery times.

7. Risk Management

L = low (risk and processing priority is low); M = medium (risk and processing priority is medium); H = high (risk and processing priority is high)

1. Risk and processing priority by test phase

Risk item                | Functional testing phase | Installation test phase | Document testing
Correctness              | H                        | H                       | H
File integrity           | H                        | H                       | H
Continuity of processing | M                        | M                       | M
Access control           | M                        | M                       | M
Compliance               | H                        | H                       | H
Reliability              | H                        | H                       | H
Ease of operation        | H                        | H                       | H
Maintainability          | H                        | H                       | H
Portability              | H                        | H                       | H

2. Description of the severity of the problem

Severity of problem and description:

Fatal defect
1. Illegal exit caused by a program crash
2. Infinite loop
3. Database deadlock
4. Program interruption caused by an incorrect operation
5. Loss of a major feature, or a critical error in a key feature
6. Database connection error
7. Data communication error

Serious defect
1. Program error
2. Program interface error
3. Incorrect or incomplete database tables, business rules, default values, or integrity constraints

General defect
1. Operator interface errors (including column name definitions and consistency within data windows)
2. Simple output restrictions not enforced in front of the control
3. Too many empty fields in database tables
