When doing automated testing, especially UI-level automation, making your code more robust is a problem you often have to think about. Here are a few tips I would like to share.
Make more use of waitForXXX methods
Strictly speaking, any long "hard wait" is not advisable. Scattering random sleep() calls only shows a lack of skill and makes your test cases inefficient. Instead, we should use waitFor-style methods, and almost every automation framework provides something similar. Take Robotium:
It can wait for a dialog to close, for an activity change, for a certain text to appear after loading is done, or for a certain view to be shown after the loading screen is done.
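As a sketch of how these waits look in code, here is a minimal example assuming a Robotium test with an initialized Solo instance; the activity name, text, view id, and timeout values below are placeholders, not from the original post:

import com.robotium.solo.Solo;

public class WaitExamples {
    // A minimal sketch of Robotium's built-in waits; "solo" is assumed to be an
    // initialized Solo instance, and the names/timeouts are placeholder values.
    public void waitBeforeActing(Solo solo, int viewId) {
        solo.waitForDialogToClose(5000);                // waits for the dialog to close
        solo.waitForActivity("SettingsActivity", 5000); // waits for an activity change
        solo.waitForText("Loading complete", 1, 5000);  // waits for a certain text after loading is done
        solo.waitForView(viewId, 1, 5000);              // waits for a certain view after the loading screen
    }
}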
Ranorex also has many of these features to help you wait for specific elements to appear.
But what if the framework does not provide a wait that fits your actual test scenario? Then we have to build it ourselves.
Implement your own waitFor method
If you think about it, such a method still needs a sleep action at its core; after all, it is waiting. What we have to do is make it wait more intelligently, like the following:
/**
 * Waits for the view with the given id to be shown within the timeout.
 *
 * @param id      the id of the view
 * @param timeout the amount of time in milliseconds to wait
 * @return true if the view is shown, false otherwise
 */
public <T extends View> boolean waitForViewShown(final int id, final int timeout) {
    final long endTime = SystemClock.uptimeMillis() + timeout;
    boolean foundMatchingView = false;
    View view;
    while (SystemClock.uptimeMillis() < endTime) {
        solo.sleep(Global.SLEEP_FLASH_TIME);
        view = solo.getView(id);
        if (view != null) {
            foundMatchingView = view.isShown();
        }
        if (foundMatchingView) {
            return true;
        }
    }
    return false;
}
We set a timeout, and within that time range the method returns immediately as soon as the target is found; otherwise it goes into the next round of the loop. This way, the subsequent steps are much less likely to fail, because the object to be manipulated has actually appeared before we act on it.
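For instance, a call site might look like the following (a sketch only; SUBMIT_BUTTON_ID and the 10-second timeout are hypothetical placeholders, not from the original post):

// Hypothetical usage of the waitForViewShown helper above; SUBMIT_BUTTON_ID and
// the 10-second timeout are placeholder values.
if (waitForViewShown(SUBMIT_BUTTON_ID, 10000)) {
    solo.clickOnView(solo.getView(SUBMIT_BUTTON_ID));
} else {
    org.junit.Assert.fail("Submit button was not shown within 10 seconds");
}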
Of course, fewer errors does not mean no errors; some inexplicable situation may still cause your code to fail. It is important to analyze errors and find the root cause that ultimately leads to case failure, but considering the cost, a simple retry on failure may sometimes be more useful to you.
Use a failure-retry mechanism to make your code more robust
The failure-retry mechanism mentioned here does not mean re-executing the whole case. Rather, when one or a few actions may fail because of some particular scenario outside your control, a few simple lines of code can make that risk more manageable. For example, the following method:
/// <summary>
/// Open the bottom app bar
/// </summary>
public static void OpenBottomAppBar()
{
    Report.Info("Running in OpenBottomAppBar method.");
    System.DateTime endTime = System.DateTime.Now.AddSeconds(30);
    while (System.DateTime.Compare(System.DateTime.Now, endTime) <= 0)
    {
        // Move the cursor to the center of the HP Connected Drive view and right click to open the menu
        Size point = repo.HPConnectedDriveView.Self.Element.Size;
        Mouse.MoveTo(point.Width / 2 < 30 ? 30 : point.Width / 2, 10);
        Delay.Seconds(1);
        Mouse.Click(System.Windows.Forms.MouseButtons.Right);
        if (Utility.FindElement(repo.HPConnectedDriveView.BottomAppBar.BottomAppBarInfo, 1000 * 2))
        {
            return;
        }
    }
    Report.Screenshot();
}
This again uses the waitFor mechanism mentioned earlier, just applied to a broader scenario: it checks in a loop whether the bottom app bar has appeared; if not, it right-clicks again and keeps trying, within 30 seconds, until the bar opens.
Note that this uses a timeout to control the retry window, which suits fault-tolerant control of single- or limited-step operations. For a large module, or the whole case, it may be better to use a retry count directly; once you are at the level of a big module, the primary goal is to let the case get through rather than to worry about the time frame.
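For reference, the time-bounded pattern above can be factored into a small generic helper. This is only a sketch, written in Java rather than the C#/Ranorex of the example above, and the names are illustrative, not part of the original framework:

import java.util.function.BooleanSupplier;

public final class Retry {
    // Keep running an action and checking a condition until the condition passes
    // or the time budget runs out; suits single- or few-step operations.
    public static boolean within(long timeoutMillis, long pollMillis,
                                 Runnable action, BooleanSupplier succeeded) throws InterruptedException {
        long endTime = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < endTime) {
            action.run();                   // e.g. right-click to open the app bar
            if (succeeded.getAsBoolean()) { // e.g. the bottom app bar is visible
                return true;
            }
            Thread.sleep(pollMillis);
        }
        return false;
    }
}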
Use a retry count to control the execution of large modules or the entire case
For example:
/// <summary>
/// Create a Snapfish account via the Snapfish website
/// Http://www.snapfish.com/snapfish/home
/// Http://us1.sfint1.qa.snapfish.com/snapfish/home
/// </summary>
/// <param name="stack"></param>
/// <returns></returns>
public static string CreateSFAccount(string stack)
{
    int i = 0;
    string result = null;
    while (i < 2)
    {
        i++;
        OpenSnapfishSignUpPage(stack);
        Delay.Seconds(4);
        ClickSignUpLink();
        Delay.Seconds(1);
        string emailPrefix = "Asgqa.hpcd.SF";
        string emailPassword = "asg111111";
        string suffix = Utility.GetUniqueNumber();
        string email = emailPrefix + suffix + Utility.MapStack(stack) + "@hp.com";
        email = InputSFRegisteredAccountInformation(email, emailPassword, emailPassword);
        ClickSFSubmitButton();
        result = email + "|" + emailPassword;
        if (Utility.FindElement(repo.Snapfish.SignOutInfo, 1000 * 30))
        {
            Validate.IsTrue(true, "Register Snapfish account successfully email=" + email + " password=" + emailPassword);
            return result;
        }
        else
        {
            Report.Warn("Failed to get the success info after registering the SF account, take screenshot for reference");
            Report.Screenshot();
            Report.Info("Click F5 to refresh the page");
            repo.Snapfish.Self.PressKeys("{F5}");
            if (Utility.FindElement(repo.Snapfish.SignOutInfo, 1000 * 30))
            {
                Validate.IsTrue(true, "Register Snapfish account successfully email=" + email + " password=" + emailPassword);
                return result;
            }
        }
    }
    Report.Warn("Still unable to create Snapfish account after trying twice, take screenshot for reference");
    Report.Screenshot();
    Report.Warn("Sometimes even when we registered successfully, the Snapfish website page still didn't refresh, so I'll use the registered account to log in to HPCD and try again");
    return result;
}
The purpose of this example is simply to go to a website and create an account; my goal is to end up with a usable account, not to test that site. So if the operation fails, I just try again until the retry attempts are used up. At that point I believe more retries would give the same result, so I return directly.
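The count-based flavor can be wrapped in the same way. Again a Java sketch with illustrative names, not the original C# code:

import java.util.function.BooleanSupplier;

public final class RetryTimes {
    // Run an action up to maxAttempts times and stop as soon as verification
    // passes; suits large modules where getting through matters more than time.
    public static boolean upTo(int maxAttempts, Runnable action, BooleanSupplier succeeded) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                action.run();
                if (succeeded.getAsBoolean()) {
                    return true;
                }
            } catch (RuntimeException e) {
                // Swallow and retry; the caller reports the final failure.
            }
        }
        return false;
    }
}

A call such as RetryTimes.upTo(2, () -> createAccount(stack), () -> isSignedIn()) would then mirror the two-attempt loop in CreateSFAccount, with createAccount and isSignedIn standing in for whatever action and verification your case needs.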
Summary
Building robust test code is a goal for us testers, just as developers constantly refactor code to make it more concise and elegant; in nature it is exactly the same thing. These are just a few small tips I often use. Dear readers, if you have any comments or suggestions, I would be glad to hear them.