Youzan's Layered Automated Testing Practice


1. Background

First, the concept of automated testing: in a broad sense, any use of tools (programs) to replace or assist manual testing counts as automation. In a narrow sense, it means simulating the manual testing process with scripts that validate the system's functionality in place of a human.

Youzan is an Internet startup. Testing started late, yet releases are very frequent. Even if each release only regresses the core functionality, the handful of testers faces a heavy and largely repetitive workload, which is extremely tedious and, over time, error-prone.

So our initial approach to test automation was simple: from a real user's perspective, simulate real-world operations and replace the execution of existing manual test cases. Each release's repetitive work could then be handed to automation, and testers only needed to focus on the incremental requirements.

As the number of scripts grew, the drawbacks of this style of automated coverage were gradually exposed:

    • Inefficient execution
    • Low build success rate (a high false-alarm rate)
    • Heavy exposure to front-end style changes
    • Many external dependencies, so not every case can be automated
    • Limited coverage capacity

Although at the framework and tooling level we combined Selenium Grid with concurrent script execution and a retry mechanism for failed cases to improve execution efficiency and reduce false positives, this only alleviated the symptoms; it did not solve the fundamental problem of incomplete coverage.

This happened to coincide with the company's move to SOA services, so the test team began to cooperate with the transformation, shifting from the original black-box, system-level automated testing to layered automated testing.

2. Layered Automated Testing

Before we talk about layered testing, let's review several concepts:

    • Unit testing: checks and verifies the smallest testable unit in the software. Concretely, it is a small piece of code written by a developer to verify that a small, well-defined piece of the code under test behaves correctly. Typically, a unit test asserts the behavior of a particular function under a particular condition (or scenario).
    • Integration testing: builds on unit testing; it assembles the tested units into modules, subsystems, or systems according to the outline design specification and verifies that they work together and meet the corresponding technical requirements. In other words, unit testing should be complete before integration testing begins; without it, the effectiveness of integration testing suffers greatly and the cost of fixing defects in unit-level code rises sharply.
    • System testing: tests the software as one element of a complete computer system, together with the hardware, peripherals, supporting software, data, and people that make up its environment. The purpose of system testing is to compare the system's behavior against its requirements definition and find where the software is inconsistent or in conflict with that definition.

Now let's look at how layered automated testing evolved as the system was split into SOA services, starting from the classic test pyramid: Unit represents unit testing, Service represents service-level integration testing, and UI represents page-level system testing. Layered automated testing advocates automation at each level of the product, and the pyramid also represents the relative effort each level deserves. Below I describe our layered automation practice.

2.1 Unit: Unit Testing

Before the split, the system was one giant monolith, and unit tests were sorely missing. As the system was gradually split into SOA services, we introduced unit test coverage requirements step by step.

Our unit tests target the DAO layer and the service layer separately. The DAO-layer unit tests mainly guarantee the correctness of the SQL; once they pass, the service-layer unit tests can be written on the premise that the DAO layer is correct.
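
The team's unit tests here are in Java, but to keep this article's examples in one language, here is a minimal sketch of the DAO-layer idea in Ruby, using the sqlite3 gem's in-memory database as a stand-in for H2. The OrderDao class and its schema are made up for illustration:

require 'sqlite3'
require 'minitest/autorun'

# Hypothetical DAO whose only job is to run SQL correctly.
class OrderDao
  def initialize(db)
    @db = db
  end

  def insert(order_no, amount)
    @db.execute('INSERT INTO orders (order_no, amount) VALUES (?, ?)', [order_no, amount])
  end

  def find_amount(order_no)
    row = @db.get_first_row('SELECT amount FROM orders WHERE order_no = ?', [order_no])
    row && row[0]
  end
end

class OrderDaoTest < Minitest::Test
  def setup
    # In-memory database, so the test can run anywhere, anytime (the role H2 plays in the Java stack).
    @db = SQLite3::Database.new(':memory:')
    @db.execute('CREATE TABLE orders (order_no TEXT PRIMARY KEY, amount INTEGER)')
    @dao = OrderDao.new(@db)
  end

  def test_insert_and_query
    @dao.insert('N001', 100)
    assert_equal 100, @dao.find_amount('N001')
  end
end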

To test at this fine granularity, you must deal with the external dependencies of unit tests. Dependencies between systems and modules can be decoupled with a mock framework (Mockito/EasyMock), and the dependency on the database can be removed with H2 Database, so that test cases can run anywhere, anytime.
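
Continuing the sketch above: Mockito/EasyMock are the Java tools named in the text; as a Ruby analogue of the same decoupling, the standard Minitest::Mock can replace the DAO so the service-layer test needs no database at all. OrderService and its discount rule are hypothetical:

require 'minitest/autorun'

# Hypothetical service that depends on a DAO.
class OrderService
  def initialize(dao)
    @dao = dao
  end

  # Rule under test (made up): orders over 100 get a 10% discount.
  def place_order(order_no, amount)
    final = amount > 100 ? (amount * 0.9).round : amount
    @dao.insert(order_no, final)
    final
  end
end

class OrderServiceTest < Minitest::Test
  def test_discount_applied_before_persisting
    dao = Minitest::Mock.new
    # Expect the DAO to be called with the discounted amount.
    dao.expect(:insert, nil, ['N002', 180])

    service = OrderService.new(dao)
    assert_equal 180, service.place_order('N002', 200)
    dao.verify # fails if the DAO was not called as expected
  end
end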

Problems found at this level are relatively cheap to locate and fix, and the maintenance cost of the automated cases is low; overall, unit testing offers the highest return on investment among automated tests.

The party responsible for unit tests is generally the developer, and writing unit tests is also a way for developers to review their own code.

2.2 Service: Service Integration Testing

Our testing at the service level is primarily integration testing of each system (subsystem). With the protection of the unit-test layer underneath, at the service level we care more about whether a system's inputs and outputs are correct, and whether the interactions between systems match the requirements of the business scenarios.

Let's look at the application architecture of our system after the SOA split:

    • Presentation layer: the legacy Iron application, written in PHP. After the split, Iron retains only the presentation logic that interacts with the front end and the API layer that invokes the core business systems
    • Core business: the core business systems split out of Iron

The subject under test at this layer is the code that was extracted out from behind the presentation layer (the presentation layer keeps the front end and part of the back-end presentation logic).

Since testing at Youzan started late (many startups are in a similar situation), testing resources were scarce and code coverage was poor. So our initial strategy for automated case coverage was:

    • Take the API interfaces of the legacy Iron application as the entry point for covering business scenarios
    • Cover the core business scenarios first
    • As the system is split, extend coverage to each system that is split out
    • For systems already split out, cover the system's service layer (covering the service's interfaces completely)
    • Prepare the test data a case depends on by invoking system interfaces (to increase business coverage)
    • Gradually shift the test approach from black box to gray box/white box

The advantage of this is that business-scenario coverage rises quickly, and the API interface cases prepared in advance can serve as smoke tests after a system is split, regressing the legacy core functionality (during a pure split, the business logic and the interface behavior exposed to the presentation layer stay unchanged). After all, what automated testing fears most is change, which brings extra script maintenance work; cases covered this way have very low maintenance cost.

Here are the basic patterns of our cases in the early stage of this layer:

    • Focus on business scenarios, consistent with the UI scripts, except that the script changes from operating pages to calling interfaces. Compared with UI automation, service-layer interface tests are more stable and the cases are easier to maintain. The service-layer interface tests can concentrate on validating the logic (business) of the system as a whole, while UI automation turns into integrated validation of page presentation logic and the front end's interaction with the services (described in the UI-layer section).
    • Do not mock between our own systems for the time being, so that the coupling and dependencies between systems are actually exercised.
    • Use a mock server to resolve the system's external dependencies, mainly third-party systems such as payment (our mock server will be introduced in a dedicated article).

Take our trading system as an example: it depends on the goods and marketing systems, so for an order-placement scenario we first build data through the goods and marketing systems' APIs as the case's precondition, then call the trading system's order-placement interface according to the business scenario, verify the return value and the data written to the DB, and finally clean up the data.
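
As an illustration of the shape such a case might take, here is a minimal sketch in this article's Ruby. Every host, endpoint, payload, and field name below is hypothetical; a real case would go through the Biz and Validator layers described next rather than raw HTTP:

require 'net/http'
require 'json'
require 'minitest/autorun'

class PlaceOrderTest < Minitest::Test
  HOST = 'http://test-env.example.com' # hypothetical test environment

  # Small helper: POST a JSON body and parse the JSON response.
  def post_json(path, body)
    uri = URI("#{HOST}#{path}")
    res = Net::HTTP.post(uri, body.to_json, 'Content-Type' => 'application/json')
    JSON.parse(res.body)
  end

  def test_place_order_with_coupon
    # Precondition: build data through the goods and marketing systems' APIs.
    item   = post_json('/goods/create',     { title: 'tea', price: 200 })
    coupon = post_json('/marketing/coupon', { discount: 20 })

    # Action: call the trading system's order-placement interface.
    order = post_json('/trade/place_order',
                      { item_id: item['id'], coupon_id: coupon['id'] })

    # Verify the return value; a real case would also check the DB via a Validator.
    assert_equal 180, order['pay_amount']
  ensure
    # Clean up (hypothetical endpoint); a real case would also remove the item and coupon.
    post_json('/trade/cleanup', { order_id: order['id'] }) if order
  end
end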

Our test framework at this layer is built on the company's general service framework (Nova) and is composed of the following:

BIT: the service interface integration test layer (business interface & integration test)

SUT: the system under test

    • Validator: encapsulation of database result validation for each business
    • Mock: a data-mocking service for the dependencies of case preconditions
    • HttpClient: wraps the Iron system's interfaces and returns a generic RpcResult object
    • Util: encapsulation of common utility classes
    • Biz: encapsulates the externally exposed interfaces of every system under test, for direct invocation
    • TestCase: our service interface tests are divided into SDV (system design verification) and SIT (system integration testing). Following the coverage strategy above, before a system is split we supplement the core interface integration cases based on the system's business scenarios and REST interfaces, and these later serve as smoke cases after the split. After the split, the system's test cases are supplemented in detail, at finer granularity.
    • Facade: combined with the Nova framework, publishes REST interfaces for constructing each business's data structures; it can also serve as a test-data construction system to assist testers with manual testing

Test coverage at this layer is performed mainly by testers, and it is where testers can contribute the most.

We do not need to know the code implementation in great detail, but our cases fully reflect our understanding of the system's structure, the relationships between its modules, and so on.

Our strategy for advancing service-layer automated testing is:

    • Gradually enrich the SDV-layer test cases and, to a degree, decouple cases from the systems they depend on, for example by shifting data construction from calling interfaces to writing directly to the database.
    • Gradually refine the split of business scenarios and keep the cases well decoupled.
    • Prioritize scenario coverage first, then consider code coverage.

2.3 UI: Presentation Testing

First, since the beginning of the article lists so many drawbacks of UI automation and its high cost, is it still necessary to automate the UI layer? The answer is yes, because the UI layer is what our product ultimately presents to users. After covering the two layers above, testers can devote more energy to UI-layer testing, and precisely because testers put so much effort into the UI layer, we need automation to free up some of that repetitive labor.

Based on our UI-layer automation practice, here are our principles for UI-layer automation coverage:

    • Whatever can be covered by automation at a lower layer should not be covered again at the UI layer
    • Automate only the most core functionality, and keep the scripts as maintainable as possible

Our approach to improving the maintainability of UI scripts is to follow the Page Object design pattern.

Page Object

The Page Object pattern provides an abstraction of a web page so that test code does not manipulate HTML elements directly. The benefits include:

    • Reduced redundancy in test code
    • Improved readability and stability of test code
    • Improved maintainability of test code

A simple example

An example of the sign-in operation on the Youzan home page (Ruby):

class LoginPage
  include HeaderNav

  def login(account, password)
    text_account.wait_until_present.set(account)
    text_password.set(password)
    button_login.wait_until_present.click
    return MainPage.new(@browser)
  end

  private

  # UI details of the page are hidden behind private element helpers
  def text_account
    @browser.text_field(:name => 'account')
  end

  def text_password
    @browser.text_field(:name => 'password')
  end

  def button_login
    @browser.button(:class => 'login-btn')
  end
end
    • Public methods expose the page's services; here, the login behavior of the login page
    • The page's UI details are hidden as private methods
    • When an action jumps to a new page, the method returns that page's object, e.g. login returns the home page (MainPage). A method that stays on the same page can return self to allow chained calls.
    • The parts common to every page, such as the top navigation, can be encapsulated in a module that each page object includes directly (a sketch follows this list)
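
For instance, such a shared navigation module might look like the following sketch (the locators and method names are made up for illustration):

# Shared top-navigation behavior, mixed into every page object.
module HeaderNav
  def goto_home
    @browser.link(:id => 'nav-home').click
    MainPage.new(@browser)
  end

  def logout
    @browser.link(:id => 'nav-logout').click
    LoginPage.new(@browser)
  end
end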

Let's take a look at the test case:

class TestLogin < Test::Unit::TestCase
  def testLogin
    @browser = Browser.new
    @browser.goto 'youzan.com'
    main_page = @browser.login_page.login('xx', '123')
    # assertions go here
  end
end

In this way, the final test script presents simple page-manipulation logic and reads closer to a textual test case.

Here's a look at our testing framework:

    • Base: this layer is much the same as most UI test frameworks. We use Selenium and Watir, and for case management we chose not Cucumber, Ruby's hottest BDD framework, but the most basic unit-testing framework, minitest. Ruby's multithreading packages are also introduced to support concurrent execution of UI scripts.
    • Actir: our own packaged test framework
      • Initializer: automatically loads all Ruby files according to the agreed project structure, and automatically generates an object instance of every page class from the page's class name via reflection (a sketch follows this list).
      • UA: encapsulates the browser user agents required by the tests.
      • Executor: the case executor. Based on Ruby's multithreading packages and Selenium Grid, it schedules and distributes all cases, which greatly improves UI script execution efficiency. The executor also includes a retry mechanism for failed cases.
      • Util: utility classes, including configuration file read/write, data-driven testing, and so on.
      • Report: automatically generates an HTML test report from the final results of the UI test scripts (for a case retried after failure, the final result counts).
      • Cli: a command-line tool wrapped around the Actir features above, to facilitate continuous integration.
    • Project
      • Pages: page objects wrapped according to the Page Object pattern
      • Components: the parts shared across pages, and plugins such as upload and address selection, packaged as modules for page objects to include directly when needed.
      • Item: objects abstracted from the system's business, such as orders, coupons, and goods.
      • User: roles abstracted from the system's business together with their actions, such as a buyer's purchase, a buyer's refund, shipping, and so on.
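
The article does not show the Initializer's code; as a sketch of how reflection-based page accessors like @browser.login_page could work (the naming convention and method_missing approach are assumptions):

# Turn 'login_page' into the LoginPage class via reflection.
module PageAccessors
  def method_missing(name, *args)
    class_name = name.to_s.split('_').map(&:capitalize).join # login_page -> LoginPage
    if Object.const_defined?(class_name)
      Object.const_get(class_name).new(self) # look up the page class and instantiate it
    else
      super
    end
  end

  def respond_to_missing?(name, include_private = false)
    Object.const_defined?(name.to_s.split('_').map(&:capitalize).join) || super
  end
end

# With Browser including PageAccessors, a test can write:
#   @browser.login_page.login('xx', '123')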

As service-layer automation coverage grows higher and higher, UI-layer automation coverage gradually turns into integrated validation of page presentation logic and the front end's interaction with the services. Our subsequent evolution plan for UI-layer automation is:

    • Mock environment dependencies, to decouple UI scripts from external dependencies
    • Complete the data preparation so that, by mocking the back-end service interfaces, UI automation can focus on automatically validating page business logic (see the sketch after this list)
    • Check UI styles through page comparison
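
Purely to illustrate that second point (this is not the team's actual mock server, which the article defers to a separate post), a minimal stub for a back-end interface can be built with Ruby's WEBrick (a gem on Ruby 3+); the path and payload are made up:

require 'webrick'
require 'json'

# Canned response for the order-list interface that the page under test calls.
server = WEBrick::HTTPServer.new(:Port => 9000)
server.mount_proc '/api/orders' do |req, res|
  res['Content-Type'] = 'application/json'
  res.body = { orders: [{ id: 1, status: 'paid' }] }.to_json
end

trap('INT') { server.shutdown }
server.start # point the page under test at localhost:9000 instead of the real service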

Automated testing of the UI layer is also the testers' responsibility, but they should not devote too much effort and resources to it; automating coverage of the core business scenarios is enough. Even with improved script maintainability, automated testing still fears change most, and the UI is the layer that changes most frequently, so a certain amount of maintenance effort will always be required.

3. Continuous Integration

With automated test scripts at each of these layers in place, we need a continuous integration system to string them together. The purposes of continuous integration:

    • Process automation to increase productivity
    • Maximizing the value of automated test scripts

Our continuous integration is based on Jenkins, and the main actions are as follows:

    • Code submission automatically triggers unit test execution
    • After the unit tests pass, the whole environment is deployed automatically
    • Integrated automated tests (service/UI) are executed automatically
    • Detailed test reports are generated automatically for each build, and the relevant people are notified automatically

The supporting facilities continuous integration requires are:

    • Automatic deployment scripts for the test environment
    • Automatic collection of code coverage
      • For Java applications, via JaCoCo plus Jenkins plugins
      • For PHP applications, via Xdebug plus PHPUnit
    • Test-report plugins and scripts
    • Static code checks, etc.

Our follow-up evolution plan for continuous integration points toward continuous delivery and continuous deployment: on top of continuous integration, automatically deploy the code to the test environment so that testers can perform manual testing.

4. Summary

This article has mainly introduced the layered automation practice during Youzan's move to SOA and its future direction, with a brief introduction to the related test framework structures. Let's review the key points of our layered automation as a whole:

    • Unit tests:
      • Highest overall priority
      • Finest granularity, full coverage
      • Implemented by development
    • Service tests:
      • Highest priority for the test team
      • Approached from the perspective of business scenarios
      • 100% coverage of each system's external interfaces
      • Focus on the dependencies and calls between systems
      • Implemented by testers
    • Page tests:
      • Relatively low priority
      • Automated coverage of core functionality only, focusing only on UI-layer problems
      • Reduced reliance on back-end data through data mocks
      • Implemented by testers

As for the exact proportion at each level, plan according to the project's needs. In the book How Google Tests Software, the figure given for Google products is 70% of the investment in unit tests, 20% in integration and interface tests, and 10% in UI-layer automated tests.

Finally, a few thoughts:

    • The lower the layer of the automation, the higher the return
    • Quality is not the testers' concern alone
    • The purpose of automated testing is not to reduce manual testing, but to let testers do more meaningful manual testing


Unless otherwise stated, this article is copyrighted by the author and the Youzan technical team and is licensed under the Attribution-NonCommercial 4.0 International license.
For reproduction, please cite: the Youzan technical team blog, http://tech.youzan.com/layers_test_automation_practice/
