UI automation testing with Sikuli is much easier than wrestling with AppleScript's awkward syntax. As long as the appearance of the UI doesn't change, changes in the interface's underlying structure don't affect Sikuli-based automation, whereas AppleScript-based automation would be affected.
Automating with image recognition is also much closer to real manual testing than automating with scripts, since it identifies controls the way a human eye does. The flip side is that visual changes in the UI do affect Sikuli automation. For BVT-level automation, though, the focus is on whether the basic functions work, so it is better for the automation to be less sensitive to structural changes in the controls.
While implementing the BVT automation for one of our products, I needed to operate on video thumbnails to test uploading them to the cloud and downloading them back, which relies heavily on recognizing the thumbnails of the test footage. During testing, the product hit a bug that produced broken thumbnails, and that bug would inevitably have blocked the testing of our other BVT functions. To keep it from blocking validation of the other basic features, I used AppleScript to get the coordinates of the test footage in the library, and then used a tool called Cliclick to click and double-click the mouse at those coordinates.
Here is the introduction to this little tool, which simulates mouse and keyboard operations, from its website:
"Cliclick" is a short for "Command-Line Interface click". It is a-a tiny shell/terminal application that would emulate mouse clicks or series of mouse clicks (including doubleclicks and control-clicks) at arbitrary screen coordinates. Moreover, it lets you move the mouse, get the current mouse coordinates, press modifier keys etc.
First, use AppleScript to get the coordinates of the footage. Since operations on a thumbnail should target its center, the script below computes the midpoint from the element's position and size and outputs those coordinates:
on run argv
    set clip_name to item 1 of argv
    tell application "RealTimes"
        activate
    end tell

    tell application "System Events"
        tell process "RealTimes"
            tell UI element 0 of scroll area 0 of group 0 of splitter group 0 of splitter group 0 of window 0
                tell (1st image whose title is clip_name)
                    set p to position
                    set s to size
                    set x to (item 1 of p) + ((item 1 of s) / 2)
                    set y to (item 2 of p) + ((item 2 of s) / 2)
                    set output to ("" & x & "," & y)
                    do shell script "echo " & quoted form of output
                end tell
            end tell
        end tell
    end tell
end run
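The Python side needs a helper that runs this script and captures the echoed "x,y" string. The post does not show its `run_apple_script` implementation; a minimal sketch might use `osascript`, which passes extra arguments to the script's `on run argv` handler (the error handling here is my assumption):

```python
import subprocess

def run_apple_script(script_path, *args):
    """Run an AppleScript file via osascript, forwarding args to its
    'on run argv' handler, and return whatever the script prints."""
    proc = subprocess.run(["osascript", script_path] + list(args),
                          capture_output=True, text=True)
    if proc.returncode != 0:
        raise Exception(proc.stderr)
    return proc.stdout

def clean_coordinates(raw):
    """Strip surrounding whitespace/newlines from the echoed 'x,y' string
    so it can be substituted directly into a Cliclick action token."""
    return raw.strip()
```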
In Python, the script above is invoked to get the coordinates, and then Cliclick is called with different parameters according to the required action:
def click_item_by_cliclick(name):
    command = cliclick_path + " c:{0}"
    _run_for_cliclick(name, command)


def double_click_item_by_cliclick(name):
    command = cliclick_path + " dc:{0}"
    _run_for_cliclick(name, command)


def right_click_item_by_cliclick(name):
    command = cliclick_path + " kd:ctrl c:{0} ku:ctrl"
    _run_for_cliclick(name, command)


def check_exist_by_cliclick(name):
    try:
        run_apple_script("get_clip_position_in_library.applescript", name)
    except Exception:
        return False
    return True


def _run_for_cliclick(name, command):
    co = run_apple_script("get_clip_position_in_library.applescript", name).strip().rstrip('\n')
    time.sleep(1)
    command = command.format(co)
    ret = exec_command(command)
    if ret[0] != 0:
        raise Exception(ret[1])


def multiple_select_by_cliclick(names):
    command = " kd:cmd"
    for name in names:
        command += " c:" + run_apple_script("get_clip_position_in_library.applescript", name).strip().rstrip('\n')
    command += " ku:cmd"
    command = cliclick_path + command
    ret = exec_command(command)
    if ret[0] != 0:
        return False
    return True
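The `exec_command` helper used above is also not shown in the post. From the way the functions inspect `ret[0]` (exit status) and `ret[1]` (output), a plausible sketch is:

```python
import subprocess

def exec_command(command):
    """Run a shell command and return (return_code, output),
    matching how the functions above check ret[0] and ret[1]."""
    proc = subprocess.run(command, shell=True,
                          capture_output=True, text=True)
    return (proc.returncode, proc.stdout + proc.stderr)
```

With this shape, a non-zero return code from Cliclick surfaces either as an exception (`_run_for_cliclick`) or as a `False` result (`multiple_select_by_cliclick`).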
Here, Python acts as the glue that feeds the AppleScript output to the Cliclick tool and returns the result.
UI Automation Testing on Mac (III)