May highlights of the Carefree Testing Forum "One Post per Day"

Source: Internet
Author: User

Compiled by the forum moderator from the daily essence posts of the testing forum; thanks to the original authors.

Post 1 [2004-5-10]: What is the ideal software testing model?

Brian Marick: I don't think there is any ideal model. I think it may be more effective for developers to undertake some of the testing, while other tests are conducted by an independent test group. If you hand all testing over to an independent test group, they will not have time to complete it all. The best approach, therefore, is to let developers take on a certain amount of the testing, with the independent test group supporting them. The independent test group tests the entire system to find defects the developers have not discovered, in areas such as subsystem interaction, operating conditions, and memory usage.

How can we conduct system testing more effectively? Let the testers participate from the initial stage of the project: let them see the first versions of the system requirements, the user manual, and the system prototype, and capture and track the requirements before the system is implemented. During this process they construct the initial test design from these documents. Inspections or reviews can also be carried out at this point, and some defects will be found along the way. As we all know, defects discovered at this stage are very "cheap".

In this way, system test engineers involved in the early stage of the project produce a list of test designs and basic items to be tested. At this point it is impossible to produce an absolutely complete test design, because the conditions for writing the complete tests are not yet mature, but this is the foundation on which the complete tests are built.

Note: Brian Marick is a full-time consultant specializing in software testing.

Post 2 [2004-5-11]: The role of the test manager

Johanna Rothman: The test manager serves two very different customers: test engineers and senior managers. For test engineers, the test manager helps develop product test strategies, accumulate product testing experience, and share that experience fully within the test group. For senior managers, the test manager collects the most comprehensive product information possible so that they can decide whether the product can be released. One thing is the same for both: the test manager helps them define and verify the product release criteria.

Defining and verifying product release criteria: as a test manager, you should seek opportunities to discuss release criteria with marketing and with developers, and revise and verify the criteria based on customer feedback. The development department's job is to meet the company's expectations for the product; the test manager should outline for developers, based on customer requirements, how the product should work and how it should appear in the customer's eyes. Once the product is clearly defined, testing can verify to what extent the product meets the customer's needs.

It is important for test engineers to prioritize test tasks so that the product release criteria can be met. Since only a few projects have enough time to complete everything, telling test engineers what to test and when to test it is an important responsibility of the test manager.

Senior managers need to fully understand the product release criteria in order to determine whether the product can be released on time. I don't think the test group has the right to decide whether the product should be released; that right belongs to the organization's senior managers. Once the product release criteria have been discussed and agreed upon, the project team can reach a better shared understanding of product quality.

Post 3 [2004-5-12]: Basic principles of testing

(US) Roger S. Pressman
Before designing effective test cases, the test engineer must understand the basic principles of software testing. Here is one set of testing principles:
1. All tests should be traceable to user requirements. As we know, software testing aims to reveal errors, and the most serious errors (from the user's perspective) are those that cause the program to fail to meet its requirements.
2. Tests should be planned long before testing begins. Test planning can start as soon as the requirements model is complete, and detailed test case definition can begin as soon as the design model is settled. Therefore, all tests can be planned and designed before any code has been generated.
3. The Pareto principle applies to software testing. Simply put, the Pareto principle implies that 80% of the errors found during testing can probably be traced to 20% of the program modules. The problem, of course, is how to isolate these suspect modules and test them thoroughly.
4. Testing should begin "in the small" and progress toward "in the large". Initial tests usually focus on individual program modules; further testing shifts to finding errors in integrated clusters of modules, and finally in the entire system.
5. Exhaustive testing is impossible. Even a moderately sized program has an enormous number of paths, so it is impossible to execute every combination of paths during testing. It is possible, however, to adequately cover the program logic and to ensure that all conditions in the program design have been exercised.
6. To be most effective, testing should be conducted by an independent third party. "Most effective" means testing with the highest probability of finding errors (the primary purpose of testing), so the software engineer who created the system is not the best person to conduct its testing.
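As a rough illustration of principle 5 (not from the original text): the number of execution paths grows exponentially with the number of independent branch points, which is why exhaustive path testing is infeasible. A minimal Python sketch:

```python
# Toy illustration: path counts explode combinatorially.
# A straight-line program containing k independent if/else blocks
# has 2**k distinct execution paths through it.

def path_count(num_branches: int) -> int:
    """Number of distinct paths through num_branches independent if/else blocks."""
    return 2 ** num_branches

for k in (10, 20, 40):
    print(k, path_count(k))   # 40 branches already exceed a trillion paths
```

Even a modest module with 40 independent decisions has over 10^12 paths, so coverage of the logic, not exhaustion of the paths, is the practical goal.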

Post 4 [2004-5-13]: What is a "good" test?

Kaner, Falk & Nguyen
1. A good test has a high probability of finding an error.
To achieve this goal, the tester must understand the software and try to imagine how it might fail. For example, one potential error in a GUI (graphical user interface) is incorrectly recognizing the mouse position; we should therefore design a test set to verify whether mouse-position recognition is faulty.
2. A good test is not redundant.
Testing time and resources are limited, so there is no point in constructing a test that has exactly the same purpose as another. Every test should have a different purpose (even if only subtly different). For example, a module in the SafeHome software recognizes a user password to decide whether to activate the system. To probe for errors in password handling, the tester designs a series of password inputs; valid and invalid passwords (four digits) are entered in different tests. However, each valid or invalid password should detect a different error mode. Suppose 8080 is the valid password: the system should not accept the illegal password 1234, and if 1234 were accepted, an error would be found. A further test input of 1235 has the same intent as 1234 and is therefore redundant. The subtly different inputs 8081 and 8180, on the other hand, are worth testing: passwords that are close to, but not identical to, the valid password should be tried.
3. A good test should be "best of breed".
Among a group of similar tests, time and resource limitations may allow only a subset to be executed. In that case, the tests most likely to uncover the whole class of errors should be used.
4. A good test is neither too simple nor too complex.
Although a group of tests can sometimes be combined into one test case, the side effects may mask errors. In general, each test should be executed independently.
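The SafeHome password discussion above can be sketched as a small non-redundant test set. The `check_password` function below is a hypothetical stand-in for the real module (an assumption, not actual SafeHome code):

```python
# Hypothetical stand-in for the SafeHome password module: the system
# activates only for the valid four-digit password "8080".
def check_password(entered: str, valid: str = "8080") -> bool:
    return entered == valid

# Non-redundant test set: each case probes a different failure mode.
cases = [
    ("8080", True),   # the valid password itself
    ("8081", False),  # one digit off at the end
    ("8180", False),  # two digits transposed
    ("1234", False),  # entirely different digits
]
# A case ("1235", False) would be redundant with ("1234", False):
# both probe the same "entirely different digits" failure mode.

for pwd, expected in cases:
    assert check_password(pwd) is expected
```

Each input earns its place by targeting a distinct way the comparison could go wrong; adding more "entirely different" passwords would consume time without probing anything new.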

Post 5 [2004-5-14]: Software testability

Roger S. Pressman
Ideally, software engineers should consider testability when designing a computer program, system, or product; this makes it much easier for test engineers to design effective test cases.

What is "testability"? Software testability refers to the ability of the software to have its faults discovered, isolated, and located, and to support test design and test execution within a given time and cost. James Bach describes it simply: software testability is the degree to which a computer program can be tested.

The following is a common software testability checklist:
· Operability - "The better it works, the more efficiently it can be tested."
· Observability - "What we see is what we test."
· Controllability - "The better we can control the software, the more testing can be automated and optimized."
· Decomposability - "By controlling the scope of testing, we can isolate problems faster and perform smarter retesting."
· Simplicity - "The less there is to test, the faster we can test it."
· Stability - "The fewer the changes, the fewer the disruptions to testing."
· Understandability - "The more information we have, the smarter we will test."
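A brief sketch (not from the original text) of what "controllability" and "observability" mean in practice: separating pure logic from an injected clock dependency lets a test control time-dependent behavior directly. The function names here are invented for illustration:

```python
import time

def make_greeting(hour: int) -> str:
    """Pure, easily observable core logic: output depends only on input."""
    return "Good morning" if hour < 12 else "Good afternoon"

def greet(clock=time.localtime):
    """Thin wrapper around the real clock; clock is injectable for test control."""
    return make_greeting(clock().tm_hour)

# Because the core logic is pure, a test controls the input directly
# instead of waiting for the system clock to reach a particular hour:
assert make_greeting(9) == "Good morning"
assert make_greeting(15) == "Good afternoon"
```

Had `greet` read the system clock inline, the morning branch could only be tested in the morning; the injected dependency is what makes the behavior controllable.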

Post 6 [2004-5-15]: Real-time system testing

Roger S. Pressman

The time dependence and asynchrony of many real-time systems add a new difficulty to testing: time. Test case designers must consider not only white-box and black-box test cases, but also event handling (such as interrupt processing), data timing, and the concurrency of the tasks (processes) that handle the data. In many cases, test data supplied while a real-time system is in one state lets it run normally, while the same data supplied in a different state may lead to errors.

In addition, the close coupling between real-time software and hardware also causes testing problems. Software testing must consider the impact of hardware faults on software processing, and such faults are very hard to simulate realistically. Because of the particularity and complexity of real-time systems, there is not yet a comprehensive test case design method for them, but testing can be roughly divided into the following four steps:

1. Task testing. The first step in testing a real-time system is to test each task independently: design white-box and black-box test cases for each task and execute them. Task testing can uncover logic and function errors, but not timing or behavioral errors.

2. Behavioral testing. Using CASE tools to create a model of the software, it is possible to simulate the real-time system and examine its behavior as a consequence of external events. These analysis activities can serve as the basis for designing test cases when the real-time system is built.

3. Intertask testing. Once errors in individual tasks and in system behavior have been isolated, testing turns to time-related errors. Asynchronous tasks that communicate with other tasks are tested with different data rates and processing loads to check whether intertask synchronization produces errors. In addition, tasks that communicate through message queues or data stores are tested to uncover errors in the sizing of those data storage areas.

4. System testing. Software and hardware are integrated, and a full range of system tests is conducted to uncover errors at the software/hardware interface.

Post 7 [2004-5-16]: Unit testing, integration testing, system testing, acceptance testing, and regression testing

Software Research
Unit testing: a unit test is a test of a basic component of the software, such as a module or a procedure. It is the most basic and important part of dynamic software testing, and its purpose is to verify the correctness of the software's basic components. The correctness of a software unit is relative to that unit's specification, so unit testing takes the specification of the unit under test as its benchmark. The main unit testing methods include control-flow testing, data-flow testing, fault-based testing, and domain testing.
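A minimal unit-test sketch in Python's standard `unittest` framework, checking one hypothetical unit (`safe_divide`, an invented example) against its specification:

```python
import unittest

def safe_divide(a: float, b: float) -> float:
    """Unit under test (specification: divide a by b; reject a zero divisor)."""
    if b == 0:
        raise ValueError("division by zero")
    return a / b

class TestSafeDivide(unittest.TestCase):
    def test_normal_case(self):
        self.assertEqual(safe_divide(10, 4), 2.5)

    def test_zero_divisor(self):
        # The specification says a zero divisor must raise ValueError.
        with self.assertRaises(ValueError):
            safe_divide(1, 0)

# Run the suite programmatically so the example is self-contained.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSafeDivide)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

Note that both cases are judged against the unit's stated specification, not against what the code happens to do; that is the sense in which the specification is the benchmark.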

Integration testing: integration testing is carried out during the integration of the software system. Its main purpose is to check whether the interfaces between software units are correct. Following the integration test plan, it combines modules or other software units into ever-larger systems, running the system while analyzing whether it is correct and whether its components work together properly. There are two main strategies for integration testing: top-down and bottom-up.

System testing: system testing is a thorough test of the integrated software system, to verify that the correctness and performance of the system meet the requirements specified in its specification. Checking whether the software's behavior and output are correct is not a simple task; it is known as the test "oracle" problem. System testing should therefore follow the test plan, and the system's inputs, outputs, and other dynamic behavior should be compared against the software specification. There are many methods for testing software systems, including functional testing, performance testing, and random testing.

Acceptance testing: the purpose of acceptance testing is to demonstrate to the purchaser that the software system meets the needs of its users. Its test data is usually a subset of the system test data. The difference is that acceptance testing often takes place with a representative of the purchaser present, and sometimes even at the site where the software will be installed and used. It is the final test before the software is put into service.

Regression testing: regression testing is performed after the software has been modified, during the maintenance phase. Its purpose is to check that the modifications are correct. Correctness here has two meanings: first, the modification achieved its intended purpose, such as fixing an error or adapting to a new operating environment; second, it did not affect the correctness of the software's other functions.

Post 8 [2004-5-17]: Software testing strategy

Roger S. Pressman
Testing is a series of activities that can be planned in advance and conducted systematically. For this reason, a template for software testing should be defined for the software engineering process: a set of steps into which specific test case design methods can be placed.

Many software testing strategies have been proposed. All of them provide developers with a testing template, and all share the following generic characteristics:
· Testing begins at the module level and works "outward" toward the integration of the entire computer-based system.
· Different testing techniques are appropriate at different points in time.
· Testing is conducted by the software developers and (for large systems) by an independent test group.
· Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.

A software testing strategy must accommodate low-level tests that verify whether a small segment of source code has been implemented correctly, as well as high-level tests that validate whether the functions of the entire system meet user requirements. A strategy must provide guidance for practitioners and a set of milestones for managers. Because the steps of the test strategy occur just as deadline pressure begins to mount, test progress must be measurable, and problems should surface as early as possible.

Post 9 [2004-5-18]: White-box testing

Rex Black
White-box testing, also known as structural testing or code-based testing, is a test case design method that derives test cases from the control structure of the program. Test cases derived with white-box methods can:
1) guarantee that all independent paths within a module are exercised at least once;
2) exercise all logical decisions on both their true and false sides;
3) execute all loops at their boundaries and within their operational ranges;
4) exercise internal data structures to ensure their validity.
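A small white-box sketch (the function and its cases are invented for illustration): the test inputs below are chosen from the control structure of `classify` so that both sides of its branch are exercised and the loop runs zero, one, and many times:

```python
def classify(values):
    """Count non-negative and negative numbers in a list."""
    pos = neg = 0
    for v in values:          # loop: exercise 0, 1, and many iterations
        if v >= 0:            # decision: exercise both true and false sides
            pos += 1
        else:
            neg += 1
    return pos, neg

# White-box cases derived from the control structure, not the spec alone:
assert classify([]) == (0, 0)           # loop body never entered
assert classify([5]) == (1, 0)          # single iteration, true side only
assert classify([-1, 2, -3]) == (1, 2)  # many iterations, both sides taken
```

The empty-list case exists purely because the code contains a loop; a spec-only (black-box) view might never suggest it.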

A reasonable question: "Shouldn't we pay more attention to implementing the program requirements? Why spend time and energy worrying about (and testing) logical details?" The answer lies in defects inherent in software:
1. Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed. Errors tend to creep in when we design and implement functions, conditions, or control flow that lie outside the mainstream. Everyday processing tends to be well understood, while "special case" processing is easily botched.
2. We often believe that a logical path is unlikely to be executed when, in fact, it may be executed on a regular basis. The logical flow of a program is sometimes counterintuitive, which means our unconscious assumptions about control flow and data flow may lead to design errors that only path testing can uncover.
3. Typographical errors are random. When a program is translated into programming-language source code, some typos will occur. Many will be caught by syntax checking, but others will go undetected until testing begins. A typo is as likely to sit on an obscure logical path as on a mainstream one.

As Beizer put it, "Bugs lurk in corners and congregate at boundaries", and white-box testing is far more likely to find them.

Post 10 [2004-5-19]: Black-box testing

Black-box testing focuses on the functional requirements of the software; that is, black-box testing enables the software engineer to derive sets of input conditions that will fully exercise all of a program's functional requirements. Black-box testing is not a substitute for white-box testing; it is a complementary approach that is likely to uncover a different class of errors. Black-box testing attempts to find errors in the following categories:
1) incorrect or missing functions;
2) interface errors;
3) errors in data structures or external database access;
4) performance errors;
5) initialization and termination errors.

Whereas white-box testing is applied early in the testing process, black-box testing is mainly applied in later stages. Black-box testing deliberately ignores the control structure and focuses attention on the information domain. It is used to answer the following questions:
1) How is functional validity tested?
2) What classes of input will make good test cases?
3) Is the system particularly sensitive to certain input values?
4) How are the boundaries of data classes isolated?
5) What data rates and data volumes can the system tolerate?
6) What effect will specific combinations of data have on system operation?

By applying black-box techniques, we can derive test cases that satisfy the following criteria:
1) test cases that reduce the number of additional test cases needed to achieve reasonable testing;
2) test cases that tell us something about the presence or absence of whole classes of errors, rather than only errors associated with the specific test at hand.
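A brief black-box sketch (the function and its specification are invented for illustration): boundary-value cases derived purely from a stated input range, with no reference to the code's internal structure:

```python
def is_valid_age(age: int) -> bool:
    """Specification (assumed): accept ages 0 through 120 inclusive."""
    return 0 <= age <= 120

# Boundary-value cases, chosen from the specification alone: one case on
# each side of each boundary probes a whole class of off-by-one errors.
boundary_cases = {
    -1: False,   # just below the lower boundary
    0: True,     # lower boundary itself
    120: True,   # upper boundary itself
    121: False,  # just above the upper boundary
}
for age, expected in boundary_cases.items():
    assert is_valid_age(age) is expected
```

Four cases here stand in for the infinitely many integers: if the boundaries are handled correctly, whole classes of range errors (e.g. using `<` instead of `<=`) are ruled out.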

Post 11 [2004-5-20]: Software testing adequacy criteria

(1) For any software, the empty test set is inadequate.
(2) For any software, there exists a finite adequate test set.
(3) If a software system has been adequately tested on a test set, it is still adequately tested on any larger set containing that set. This property is called monotonicity.
(4) Even if every component of the software has been adequately tested, it does not follow that the whole software system has been adequately tested. This property is called non-compositionality.
(5) Even if a software system has been adequately tested, it does not follow that every component of the system has been adequately tested. This property is called non-decomposability.
(6) The adequacy of software testing should relate to both the software's requirements and its implementation.
(7) The more complex the software, the more test data is required. This property is called complexity.
(8) The more testing that has already been done, the less additional adequacy new tests contribute. This property is called diminishing returns.

Post 12 [2004-5-21]: Static testing

Every document produced during software development must be checked to determine whether its quality meets requirements. This inspection work is consistent with the idea of total quality management and with the project management process. Each document that passes static testing marks the conclusion of one piece of development work, the progress of the project, and its entry into a new stage.

The basic characteristic of static testing is that the program under test is not actually executed; the software is analyzed, inspected, and reviewed instead. It can be applied to all kinds of software documents and is one of the most effective quality control methods in software development. In the early stages of development no runnable code exists yet, so dynamic testing is impossible; yet the quality of the intermediate products of these stages directly determines the success or failure of the development effort and the size of its cost. Static testing therefore plays an important role in these stages. From years of practical experience and lessons learned in software development, a number of effective static testing techniques have been distilled, such as structured walkthroughs and formal inspections. These methods can be combined with quantitative software quality measurement techniques to monitor and control the development process and thereby assure software quality.

Post 13 [2004-5-22]: What is a test requirement?

Brian Marick
The concept of a test requirement is relatively simple. For example, consider a program that computes square roots: if you input a number greater than or equal to zero, the program gives a result; if you input a number less than zero, the program reports an input error. Engineers who have read The Art of Software Testing will immediately think of boundary values: test zero, and test a negative number very close to zero. Those are two specific test requirements.
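The square-root example above can be sketched directly. `safe_sqrt` is a hypothetical wrapper written for this illustration, not code from the original post:

```python
import math

def safe_sqrt(x: float) -> float:
    """Return the square root of x; report an input error for x < 0."""
    if x < 0:
        raise ValueError("input must be >= 0")
    return math.sqrt(x)

# The two boundary test requirements, turned into concrete cases:
assert safe_sqrt(0.0) == 0.0       # requirement 1: exactly zero
try:
    safe_sqrt(-1e-9)               # requirement 2: negative, very close to zero
    raise AssertionError("expected ValueError for negative input")
except ValueError:
    pass
```

Note that the requirements ("test zero", "test a negative number near zero") came first; the concrete values 0.0 and -1e-9 were chosen afterward to satisfy them.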

For a more complex program, you can build up a list of items to be tested in this way. None of these test requirements, however, determines specific test data. For example, for a bank transaction program, one test requirement is attempting to pay a customer a negative amount; another is a transaction involving a customer who does not exist; and so on. You end up with a series of such test requirements, none of which specifies concrete values or data, such as the customer's name.

The next step is to choose input values and test data that satisfy these test requirements. A single test case may satisfy several test requirements at once. That is the ideal situation, but such cases are costly to design. The alternative is to design a separate test case for each test requirement; you then avoid designing complex test cases, but these relatively simple cases are less effective at discovering defects.

Here is an example of test requirements for the insert operation of a hash table:
1) insert a new entry;
2) insertion fails: the entry already exists;
3) insertion fails: the table is full;
4) the hash table is empty before insertion.
These are test requirements, not test cases, because they do not describe the element being inserted. Moreover, you cannot write the test cases immediately, just as you cannot start coding the moment the software requirements are complete. You must first review the test requirements to make sure they are correct and that none are missing.
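The four test requirements above might be turned into concrete test cases as follows. `FixedHashTable` is a deliberately simplified, hypothetical implementation used only to make the cases runnable:

```python
class FixedHashTable:
    """Toy fixed-capacity table; insert reports failure instead of raising."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = {}

    def insert(self, key, value) -> bool:
        if key in self.data:
            return False              # requirement 2: entry already exists
        if len(self.data) >= self.capacity:
            return False              # requirement 3: table is full
        self.data[key] = value
        return True

t = FixedHashTable(capacity=2)
assert t.insert("a", 1)               # requirements 1 and 4: insert into empty table
assert not t.insert("a", 2)           # requirement 2: duplicate entry rejected
assert t.insert("b", 2)               # requirement 1: another new entry
assert not t.insert("c", 3)           # requirement 3: full table rejected
```

Only at this point do concrete keys and values ("a", "b", capacity 2) appear; the requirements themselves never mentioned them, which is exactly the distinction the post draws.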

Post 14 [2004-5-30]: GUI testing

Roger S. Pressman

The graphical user interface (GUI) presents interesting challenges for software testing. Because GUI development environments offer reusable components, building user interfaces has become less time-consuming and more precise. At the same time, the complexity of GUIs has grown, making test case design and execution more difficult. Because modern GUIs share similar designs and conventions, a set of common testing guidelines has emerged. The following questions can serve as a guide for generic GUI testing:

· Is the window properly opened based on relevant input and menu commands?
· Can windows be resized, moved, and scrolled?
· Can the data content in the window be accessed with the mouse, function key, direction key, and keyboard?
· Can windows be correctly regenerated after being overwritten and called again?
· Can all window-related functions be used when necessary?
· Are all window-related functions operable?
· Is there any related drop-down menu, toolbar, scroll bar, dialog box, button, icon, and other controls that can be used for the window and displayed properly?
· Is the window name properly displayed when multiple windows are displayed?
· Is the activity window properly highlighted?
· If multiple tasks are used, are all windows updated in real time?
· Will repeated or incorrect mouse clicks lead to unexpected side effects?
· Do the window's audible and color prompts and its operation sequence meet requirements?
· Is the window properly closed?

Drop-down menu and mouse operation:
· Is the menu bar displayed in a proper context?
· Does the menu bar of the application display system-related features (such as a clock display)?
· Can the drop-down operation work correctly?
· Is the menu, palette, and toolbar working correctly?
· Are all menu functions and drop-down sub-functions properly listed?
· Can all menu functions be accessed with the mouse?
· Is the text font, size, and format correct?
· Can each menu function be invoked by an alternative text-based command?
· Is the menu function highlighted or dimmed with the current window operation?
· Is the menu function correctly executed?
· Is the name of the menu function self-explanatory?
· Are menu items helpful and context-related?
· Can mouse operations be identified throughout the interactive context?
· Can I correctly identify the context by clicking the mouse multiple times?
· Does the cursor, processing indicator, and recognition pointer change as appropriate?

Data items:
· Can alphanumeric data items be correctly displayed and input to the system?
· Does a data item (such as a scroll bar) in graphic mode work normally?
· Can illegal data be identified?
· Is the data input message understandable?

Post 15 [2004-5-31]: Client/Server testing

Roger S. Pressman

Generally, Client/Server Software Testing takes place at three different levels:
(1) individual client applications are tested in "disconnected" mode, without considering the operation of the server or the underlying network;
(2) the client software and associated server applications are tested together, but network operations are not explicitly exercised;
(3) the complete C/S architecture, including network operation and performance, is tested.

The following testing approaches are commonly used for C/S applications:
Application function tests - the client application is tested independently to reveal errors in its operation.
Server tests - the coordination and data management functions of the server are tested; server performance (overall response time and data throughput) is also considered.
Database tests - the accuracy and integrity of data stored on the server are tested, and transactions posted by client applications are examined to ensure that data is properly stored, updated, and retrieved.
Transaction tests - a series of tests is created to ensure that each class of transaction is processed as required, focusing on the correctness of processing and on performance.
Network communication tests - these tests verify that communication between network nodes occurs correctly and that message passing, transactions, and related network traffic behave normally.
