Automated Testing: Control Clicks




In UI automation testing, the first thing to consider is the test tool or framework we choose to support the test program, and that support is reflected mainly in how it identifies and operates on controls. However, no matter how a tool or framework supports a test program, it ultimately executes operations at absolute screen coordinates, even though we often hear people say that coordinates should be avoided.
"Avoid using coordinates" and "everything ultimately resolves to coordinates" may look contradictory, but they are not; it is a bit like the feeling of Tai Chi.
Coordinates usually fall into two categories: absolute coordinates and relative coordinates.
1. The coordinates we usually talk about are absolute coordinates measured from the upper-left corner of the screen. Both the test tool and a human operator ultimately drive the program under test through these coordinates; the only difference is that sometimes we are aware of it and sometimes we are not.
In layman's terms: when we click the screen manually, a Click event is sent at the location <x,y>. Windows routes that event to the currently active window and then to the specific position within that window, which produces the response. From our point of view we simply operate on the active window and see the result; we never think about how the positioning happens.
Automated testing is essentially the simulation of human behavior, so the implementation is broadly similar. Our code first locates the control we want to manipulate by ID, text, index, class, and so on, then reads the control's X and Y properties and sends a Click event there. At the usage level, however, all we see is "click by ID"; the coordinate lookup happens behind the scenes.
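To make that concrete, here is a minimal sketch (Python on Windows via ctypes; the window title is an assumption) of the same "find the window, read its rectangle, click at a computed point" sequence that a test tool performs under the hood:

```python
# Minimal sketch: locate a window, read its absolute rectangle, and send a
# Click event at its centre. Windows only; the window title is assumed.
import ctypes
from ctypes import wintypes

user32 = ctypes.windll.user32

hwnd = user32.FindWindowW(None, "Untitled - Notepad")   # locate by title
if not hwnd:
    raise RuntimeError("Target window not found")

rect = wintypes.RECT()
user32.GetWindowRect(hwnd, ctypes.byref(rect))          # absolute coordinates

# Click the centre of the window: move the cursor, then press and release.
cx, cy = (rect.left + rect.right) // 2, (rect.top + rect.bottom) // 2
user32.SetCursorPos(cx, cy)
MOUSEEVENTF_LEFTDOWN, MOUSEEVENTF_LEFTUP = 0x0002, 0x0004
user32.mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, 0)
user32.mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, 0)
```

A real framework does the same thing, only with richer lookups (ID, text, index, class) instead of a window title.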
2. Relative coordinates are a little more complicated. The known reference points are the upper-left, lower-left, upper-right, lower-right corners and the center; and, depending on how the object behaves when the parent window changes, it can be categorized as: no scaling, synchronized stretching (the object zooms in and out with the parent window), or proportional scaling (if the parent window grows three times, the object grows three times).
For example, suppose there is a Notepad window, "notepad.txt", whose absolute coordinates are <400,600>, and that the menu bar of this Notepad has absolute coordinates <450,700>. If we denote the coordinates of the Notepad window as <x,y>, the menu bar can be described as: offset <50,100> relative to the window "notepad.txt", with absolute coordinates <x+50,y+100>. Now suppose the Edit button in the menu bar has absolute coordinates <500,700>; it can be described as: offset <50,0> relative to the menu bar, with absolute coordinates <x+50+50,y+100+0>. In this way, every coordinate in the window can be maintained from a single coordinate: if the window moves, we only need to update the top-level parent window's coordinates and all of the button coordinates follow automatically.
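The arithmetic is simple enough to capture in a few lines. A small sketch using the numbers from the Notepad example (the scaling helper merely illustrates the "parent grows k times, object grows k times" mode mentioned above):

```python
# Child positions are stored as offsets from their parent; absolute positions
# are recovered by walking up the chain. Numbers come from the Notepad example.
def to_absolute(parent_abs, offset):
    """Absolute position = parent's absolute position + relative offset."""
    return (parent_abs[0] + offset[0], parent_abs[1] + offset[1])

window = (400, 600)                        # top-level "notepad.txt" window <x,y>
menu_bar = to_absolute(window, (50, 100))  # offset <50,100>  -> (450, 700)
edit_btn = to_absolute(menu_bar, (50, 0))  # offset <50,0>    -> (500, 700)

# If the window moves, only the top-level coordinate changes:
moved_window = (410, 650)
print(to_absolute(to_absolute(moved_window, (50, 100)), (50, 0)))  # (510, 750)

# Proportional scaling mode: if the parent grows k times, the offset grows k times.
def scaled_offset(offset, k):
    return (offset[0] * k, offset[1] * k)
```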
That is what "relative" means here. So-called relative coordinates are really an optimization of the coordinate calculation: with as few changes as possible we adapt to as many changes as possible.
As for the various reference points and scaling modes, they are just different ways of doing the calculation, and there is no need to walk through each of them here.
So, what can we do once we understand the above?
In automated testing you will always run into custom controls that cannot be recognized, especially in app-style programs. In those cases we can use the coordinate techniques above to work around the problem.
Now let me briefly describe a simple way to implement coordinate-based control recognition:
1. The idea: use a screenshot as a virtual surface on which "virtual controls" can be positioned quickly in a coordinate system, attach some additional recognition parameters to them, and embed the result in other test tools so it can be used directly.
2. Implementation. First, call the Windows API or another screenshot program to capture a full-window screenshot of the program under test and display it as a translucent overlay (it can be scaled);
Then, obtain the coordinates of the program under test, monitor drag events on the screenshot to compute each control's position in that coordinate system, and write the result to an XML file (a small sketch of these steps appears after this list);
Next, import the XML file into the test tool's object repository for use. Note that some test tools do not support external custom objects, so some conversion may be required.
3. Code: I will post it later. This kind of thing is best made visual so that it is easier to operate.
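Until that code is posted, here is a minimal stand-in sketch of the steps in item 2 (Python on Windows, assuming the Pillow library is installed; the window title, control name, and XML layout are all hypothetical, not the author's actual format):

```python
# Minimal sketch: grab the window under test, and record one user-marked
# rectangle as an offset relative to the window in a simple XML file.
import ctypes
from ctypes import wintypes
import xml.etree.ElementTree as ET
from PIL import ImageGrab  # pip install pillow

user32 = ctypes.windll.user32
hwnd = user32.FindWindowW(None, "Untitled - Notepad")   # window title is assumed
rect = wintypes.RECT()
user32.GetWindowRect(hwnd, ctypes.byref(rect))

# Step 1: screenshot of the full window (what the translucent overlay would show).
shot = ImageGrab.grab(bbox=(rect.left, rect.top, rect.right, rect.bottom))
shot.save("window.png")

# Step 2: a rectangle the user dragged on the screenshot, in screen coordinates.
# Hard-coded here; a real tool would collect it from mouse events on the overlay.
marked = {"name": "EditButton", "left": 500, "top": 700, "width": 40, "height": 20}

# Step 3: store it relative to the window's top-left corner and write the XML.
root = ET.Element("VirtualControls", window="Untitled - Notepad")
ET.SubElement(root, "Control",
              name=marked["name"],
              x=str(marked["left"] - rect.left),
              y=str(marked["top"] - rect.top),
              width=str(marked["width"]),
              height=str(marked["height"]))
ET.ElementTree(root).write("controls.xml", encoding="utf-8", xml_declaration=True)
```

At run time, a driver would read the window's current top-left corner and add each stored offset back to it before sending the click, exactly as in the relative-coordinate example earlier.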
