Automated Testing in Microsoft Azure

Tags: management studio, git, shell

When using tools that interact with Microsoft Azure and trying to measure their performance, it's virtually impossible to get anything like fair, consistent test results. A set of timings from a test run at lunchtime can look quite different from the same test run after everyone has left the office for the night. The results depend on the network traffic around you, and they can vary from day to day.

The Cerebrata team is currently testing ways to increase the speed of transfers to and from Microsoft Azure blob storage, and we need timing data we can rely on. The obvious workaround is to run the tests from Azure virtual machines deployed in the same datacenter as our blob storage. This is by no stretch of the imagination perfect, since we give up control over the resources involved, but it's better than running the tests from our office. The main cause of variation in our results, an inconsistent connection, is far less likely to be a problem thanks to fewer hops to the destination and the ability to take advantage of the high-bandwidth connections available only inside the datacenter.

Of course, deciding to run the tests on Microsoft Azure virtual machines is all well and good, but what happens when you want to run them more than once? Remoting into a VM to start the tests several times a day quickly becomes tedious, and you waste time every time you forget to kick off a run. Also, if the tests are still under development and you're adding more every day, you have to find a way to get the updated files onto the virtual machine every time you want to run them. And when all that is done, where are your results? On the virtual machine, which is yet another thing to deal with!

We knew we would be running these tests several times a day over a couple of months, so we wanted to automate the process as much as possible. But since there would be no need for the tests after a few weeks, we weren't interested in setting up build servers, deployments to Azure virtual machines, or anything similar. Instead, all of the automation lives on the VM itself, and here's how we implemented it.

Setting up the VMs

Apart from bandwidth, which is always the limiting factor for performance, there are no special VM requirements. In Azure, the size of the virtual machine you run affects how much of the physical network card (NIC) bandwidth you are allocated, and naturally the larger the VM you choose, the more you get. However, if other virtual machines on the same physical machine aren't busy using the NIC, you may pick up some extra throughput. That's worth bearing in mind, especially for tests like ours.

We're in a slightly unusual position because the different performance options we're measuring are selected by conditional code, triggered by compile-time conditional compilation symbols. This means our VMs need to be able to build C# code during a test run, so we installed the Windows SDK. The tests run under NUnit, so with that installed the VM was ready to run them. Nothing is automated at this point, but we'll get to that shortly.

Collecting test results

To begin with, the output from each test was written to the console and, when needed, piped to a text file on the command line. That doesn't work under automation, and it also leaves the results on the virtual machine, meaning they have to be retrieved manually at some point. The first step was to have each test write its results to a file as well as to the console, so that we get one result file per test.

This still leaves the text files on the VM, so we decided to upload them to blob storage, with names that identify the test and the date. Using a free Azure management tool, we then have access to all the result files even when the virtual machines are shut down or destroyed, which is almost as good as having the files on the local computer.
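As a sketch of this naming scheme (the test name, prefix, and timestamp format here are hypothetical, not the exact convention we use), a result file can be keyed by test name and run date before being uploaded:

```python
from datetime import datetime, timezone

def result_blob_name(test_name: str, run_time: datetime) -> str:
    """Build a blob name that identifies the test and the date of the run.
    The 'results/' prefix and timestamp format are illustrative choices."""
    stamp = run_time.strftime("%Y%m%d-%H%M%S")
    return f"results/{test_name}/{stamp}.txt"

# Example: a run of a hypothetical 'Upload_4MB_Blocks' test.
name = result_blob_name("Upload_4MB_Blocks",
                        datetime(2014, 5, 1, 2, 30, 0, tzinfo=timezone.utc))
print(name)  # results/Upload_4MB_Blocks/20140501-023000.txt
```

Keeping the test name and timestamp in the blob name means the files sort naturally per test, and no run ever overwrites another.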

That's progress, but reading through hundreds (possibly thousands) of text files full of timings is unrealistic and quickly becomes very cumbersome, so some processing was needed. For this we wrote an application that parses each text file, merges the results for the same test, and calculates the average timings. This gives us the average for each test configuration and transfer file size, presented in an easy-to-read format with the best-performing settings listed first. These summary files are also uploaded to blob storage and are updated each time the results-analysis application runs.
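The core of that merge-and-average step can be sketched in a few lines. The result-file line format shown here ("TestName: 12.3s") is an assumption for illustration, not the actual format our files use:

```python
import re
from collections import defaultdict
from statistics import mean

# Assumed result-file line format: "TestName: <seconds>s".
LINE = re.compile(r"^(?P<test>\S+):\s*(?P<secs>[\d.]+)s$")

def average_timings(files: list[str]) -> dict[str, float]:
    """Merge timings for the same test across result files and average them,
    returning the fastest configurations first (mirroring the summary order)."""
    timings = defaultdict(list)
    for text in files:
        for line in text.splitlines():
            m = LINE.match(line.strip())
            if m:
                timings[m.group("test")].append(float(m.group("secs")))
    return dict(sorted(((t, mean(v)) for t, v in timings.items()),
                       key=lambda kv: kv[1]))

# Two hypothetical result files from separate runs.
runs = ["Upload_4MB: 10.0s\nUpload_1MB: 30.0s",
        "Upload_4MB: 12.0s\nUpload_1MB: 28.0s"]
print(average_timings(runs))  # {'Upload_4MB': 11.0, 'Upload_1MB': 29.0}
```

Averaging across every run of a configuration is what smooths out the per-run network noise described earlier.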

The last piece of data we often want to see is the distribution of timings for a specific configuration option or set of options. Here the individual result files are parsed again and the timing data is imported into Excel, which forms the basis for our custom charts. As before, the Excel files are uploaded to blob storage for resilience and ease of access.

So at the end of each set of tests we run the analysis application, which gives us individual result files to drill down into a performance run, a summary across all runs, and a spreadsheet with data for charting, all conveniently available in blob storage.

Automation

This is probably the least interesting part of the whole process, because we kept things simple using nothing but batch files kicked off by the Windows Task Scheduler. As mentioned earlier, the virtual machine can build the tests and the libraries they exercise, so the source code is already present. A batch file loops through the different configuration options, builds each one with the appropriate compilation symbol defined, and then runs the tests. Once the loop completes, the results-processing application is built and run to produce our summary results file and Excel spreadsheet.
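The loop the batch file performs can be sketched as follows. The project name, symbol names, and exact tool invocations are hypothetical placeholders, though MSBuild's `/p:DefineConstants` property is the standard way to set a compilation symbol from the command line:

```python
# Hypothetical configuration symbols that select the conditional code paths.
CONFIG_SYMBOLS = ["SINGLE_UPLOAD", "PARALLEL_UPLOAD", "LARGE_BLOCKS"]

def build_and_run_commands(symbols):
    """For each compilation symbol: build with that symbol defined,
    then run the NUnit tests against the fresh build."""
    commands = []
    for symbol in symbols:
        commands.append(
            f"msbuild PerfTests.csproj /p:DefineConstants={symbol}")
        commands.append("nunit-console PerfTests.dll")
    return commands

for cmd in build_and_run_commands(CONFIG_SYMBOLS):
    print(cmd)
```

In our setup these lines live in a plain batch file rather than a script, but the build-then-test pairing per symbol is the same.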

Getting code changes

The initial setup got our benchmarks running fairly quickly, and given that the tests take a few hours to complete, the sooner we started running them and adding different configurations, the more data we would have. The problem we still needed to address was how to update the VMs with the latest source files before each test run. Remoting into the VM to check that the previous run had finished, then manually copying the source files across from a local machine, would make the rest of the automation almost pointless.

Fortunately, the source code for these performance tests is hosted on GitHub, and the VMs obviously already have an Internet connection, so we can easily pull down any changes. To do this automatically, another batch file was created that uses the Git shell to pull the latest changes and then runs the previously created batch file to set the tests going. As long as that second, pull-managing batch file isn't modified, we never have to log on to the virtual machine, and the tests always run against the latest checked-in changes.
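The pull-then-run wrapper amounts to two steps. Here's a sketch in script form (the repository path and batch file name are hypothetical; `git -C <path> pull` is the real Git option for running a command against a repository elsewhere on disk):

```python
import subprocess

# Hypothetical locations on the VM.
REPO = r"C:\perf-tests"
RUN_TESTS = r"C:\perf-tests\run-tests.bat"

def update_and_run(dry_run=False):
    """Pull the latest checked-in changes, then kick off the test batch file.
    With dry_run=True, return the commands instead of executing them."""
    steps = [["git", "-C", REPO, "pull"], [RUN_TESTS]]
    if dry_run:
        return steps
    for step in steps:
        subprocess.run(step, check=True)  # stop here if the pull fails
    return steps

print(update_and_run(dry_run=True))
```

Failing fast when the pull errors out matters: running a few hours of tests against stale code would quietly pollute the result set.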

Switching the Windows Task Scheduler to start this new batch file (rather than the old one) was easy, and it gives us options for how to schedule and manage the test cycles. We can add a check to make sure the previous test run has finished before the scheduler starts another, choose to run only on a fixed schedule, or choose to kick the whole process off again as soon as the previous run completes.
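One simple way to implement that "has the previous run finished?" check, sketched here with a hypothetical lock-file path, is for the test batch file to create a lock file when it starts and delete it when it completes; the scheduled task then bails out if the file still exists:

```python
import os

# Hypothetical lock file the test run creates on start, deletes on finish.
LOCK_FILE = "run.lock"

def previous_run_finished(lock_path=LOCK_FILE):
    """True if no run is currently in flight, so a new one may start."""
    return not os.path.exists(lock_path)

print(previous_run_finished("no-such-file.lock"))  # True
```

This is deliberately crude (a crashed run leaves a stale lock behind), but for a short-lived test rig it's usually good enough.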

Wait, this environment doesn't represent our user base

With the automation fully set up and running well, the results started coming in. It quickly became apparent that the larger the block size and the higher the number of parallel block uploads, the faster the transfer, which is no surprise on virtual machines with enormous bandwidth. This obviously doesn't represent the typical usage of our customers, most of whom are transferring files from their local networks, which bear no comparison to our Azure virtual machines.

We want Azure Management Studio to transfer files quickly for everyone, not just for users with wonderful connections, so we need to run our tests in bandwidth-limited environments to see whether our findings still hold. To achieve this we use the Windows Network Emulator Toolkit, a software emulator that can impose limits on bandwidth, packet loss, errors, latency, and so on. It's not the prettiest application, but it's very useful. Leaving it running on a VM lets us run tests under specified bandwidth and network conditions that better simulate a customer's real environment.

Summary

This was a quick and easy way to get some automated tests running in Azure, and it works well, largely because it only has to last a couple of months. Since we're only running a handful of virtual machines for a month or two, cost isn't a real concern, so we haven't felt the need to shut the VMs down while they're idle. We could easily do so by having a batch file shut down the virtual machine after the tests complete, and then have our build server run a PowerShell script to start it up again ready for the next round of testing.

Were this work continuing for longer, we could also look at taking advantage of the new Microsoft Azure Automation feature, which is currently in preview.

Similarly, if these were long-running tests, or tests distributed across multiple virtual machines, we would be bound to involve our local build servers and write infrastructure to create virtual machines on demand, run tests remotely, and graph the test results. With the addition of the new optional VM agent and the Puppet and Chef extensions for Microsoft Azure virtual machines, there is also great scope for creating extensive automated test infrastructure in Azure, especially if it's an investment in something you know you'll be using for a while.
