Analyze performance while you debug in Visual Studio 2015

Source: Internet
Author: User


Many developers spend the majority of their time getting their applications to function correctly, and spend far less time focusing on application performance.

Although Visual Studio has long included profiling tools, they are a separate set of tools to learn, and many developers don't take the time to learn and use them when a performance problem occurs.

This article describes the new Diagnostic Tools debugger window in Visual Studio 2015.

It also describes how to use it to analyze performance as part of a regular debugging workflow.

I'll first provide an overview of the debugger's features and functionality, and then take a deep-dive walkthrough. I'll show you how to use PerfTips to time sections of code between breakpoints and steps, how to use the Diagnostic Tools window to monitor CPU and memory, and how to take snapshots to drill into memory growth and leaks.

The features in this article are available for debugging most managed and native projects.

Microsoft is constantly adding support for more project types and debugging configurations.

For up-to-date information about currently supported features, check the Diagnostic Tools window blog post. A separate article will explain how to use IntelliTrace in the Diagnostic Tools window (see "Using IntelliTrace to Diagnose Problems Faster") to quickly determine the root cause of bugs in your code.

Performance at debug time
In the past, instead of running a full profiling tool, you may have taken one or more of the following approaches:

Inserting timing code (such as System.Diagnostics.Stopwatch) into the application to measure various sections, iteratively adding stopwatches as needed to narrow down the hot path.

Stepping through the code to see whether any single step "feels slow."

Pressing the Break All ("pause") button at random points to get a feel for how far execution has progressed. In some circles this is referred to as "poor man's sampling."

Over-optimizing code without measuring, sometimes by applying a set of performance best practices across the entire code base. These practices are often inaccurate, time-consuming, or both.
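
As a concrete illustration of the stopwatch approach above, here is a minimal sketch; `ProcessImages` is a hypothetical stand-in for whatever code you suspect is slow:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class TimingExample
{
    static void Main()
    {
        // Manually bracket the suspected hot path with a Stopwatch --
        // the instrumentation that break events and PerfTips make unnecessary.
        var stopwatch = Stopwatch.StartNew();

        ProcessImages(); // hypothetical code under investigation

        stopwatch.Stop();
        Console.WriteLine($"ProcessImages took {stopwatch.ElapsedMilliseconds} ms");
    }

    static void ProcessImages()
    {
        // Placeholder for the real work being timed.
        Thread.Sleep(100);
    }
}
```

The drawback is that you have to edit, rebuild, and later remove this instrumentation, which is exactly the friction the debugger-integrated timings avoid.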

That's why there are now performance tools in the debugger: they help you understand the performance of your application during normal debugging.

Diagnostic Tools window
You will notice that the new Diagnostic Tools window appears when you start debugging code in Visual Studio 2015, as seen in Figure 1. These diagnostic tools present information in two complementary ways: graphs in the timeline in the top half of the window, and detailed information in the tabs at the bottom.

Figure 1 The new Diagnostic Tools window in Visual Studio 2015

In Visual Studio 2015, you will see three tools in the Diagnostic Tools window: Debugger (including IntelliTrace), Memory Usage, and CPU Usage.

You can enable or disable the CPU Usage and Memory Usage tools by clicking the Select Tools drop-down list. The Debugger tool shows three tracks: break events, output events, and IntelliTrace events.

Break event history and PerfTips
Break events let you see how long each section of code takes to execute.

Each rectangle represents the duration from when the application starts or resumes execution until the debugger pauses it (see Figure 2).

Figure 2 Break events and PerfTips

The left edge of the rectangle indicates where the application started running: by continuing (F5), stepping (F10, F11, Shift+F11), or Run to Cursor (Ctrl+F10). The right edge of the rectangle indicates where the application stopped: because it hit a breakpoint, completed a step, or because you used Break All.

The duration of the most recent break event is also displayed in the editor at the end of the current line of code.

This is called a PerfTip. It lets you monitor performance without taking your eyes off your code.

In the details table below the graphs, you can also see the history and duration of break events and PerfTips in table format. If you have IntelliTrace, additional events appear in the table. You can use the filter to show only Debugger events and see just the break-event history.

The CPU and memory analysis timeline automatically selects a time range as you set breakpoints and step.

When a breakpoint is hit, the current time range is reset so that only the latest break event is shown.

The selection can be extended to include earlier break events by clicking a break event rectangle. You can also override the automatic time range selection by clicking and dragging on the timeline.

The time range selection keeps the CPU Usage and Memory Usage graphs scoped to your range of interest, so that you can understand the CPU and memory characteristics of a particular section of code.

The graphs continue to update while the application executes, letting you monitor CPU and memory as you interact with your application. You can switch to the Memory tab, take a snapshot, and see a detailed breakdown of memory usage.

IntelliTrace performance insight
IntelliTrace (not available in the Visual Studio Community edition) gives you additional insight when debugging managed code.

IntelliTrace adds two tracks to the debugger events timeline: the Output track and the IntelliTrace track. These tracks contain the events that appear in the Output window, plus additional events collected by IntelliTrace, such as exceptions, ADO.NET events, and so on. Events on these tracks also appear in the debugger events table.

You can correlate IntelliTrace events with spikes in the CPU Usage and Memory Usage graphs. The timestamps show you how long various actions in your application take. For example, you can add Debug.WriteLine statements to your code and use the timestamps on the output events to see how long execution took from one statement to the next.
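
As a minimal sketch of that technique (the work between the two statements is a stand-in):

```csharp
using System.Diagnostics;
using System.Threading;

class OutputEventTiming
{
    static void Main()
    {
        // Each Debug.WriteLine produces an output event on the debugger's
        // Output track; the gap between their timestamps is the elapsed time.
        Debug.WriteLine("Starting thumbnail load");

        Thread.Sleep(250); // stand-in for the operation being measured

        Debug.WriteLine("Finished thumbnail load");
    }
}
```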

Improve performance and memory
Now that you have seen the window's functionality, let's dig into practical use of the tools. In this section, we'll walk through solving a set of performance problems in an application called PhotoFilter. The app downloads images from the cloud and loads images from the user's local picture library so the user can view them and apply image filters.

If you want to follow along, download the source code. Because performance differs from machine to machine, you will see different numbers.

They will even vary from run to run.

Slow application startup
When you start debugging the PhotoFilter app, you'll find it takes a long time to launch the app and load the pictures.

This is an obvious problem.

When you debug a functional problem in an application, you tend to form a hypothesis and start debugging based on it. In this case, you might guess that the pictures load slowly, and look for a good place to set a breakpoint to test that hypothesis.

The LoadImages method is a great place to do this.

Set breakpoints at the start and end of the LoadImages function (see the code in Figure 3) and start debugging (F5).

When the code hits the first breakpoint, press Continue (F5) to run to the second breakpoint.

There are now two break events in the timeline of debugger events.

Figure 3 The LoadImages method

The first break event shows that the application ran for only 274 ms before hitting the first breakpoint. The second shows that it took 10,476 ms to execute the code in the LoadImages function before hitting the second breakpoint. The same value is displayed in the PerfTip after the code. So you've narrowed the problem down to the LoadImages function.

To get more information about how long each line takes, start debugging again so that you hit the first breakpoint. This time, step through each line of code in the method to see which lines take the longest.

From the PerfTips and the break-event durations in the debugger, you can see that GetImagesFromCloud took 7,290 ms, LoadImagesFromDisk took 736 ms, the LINQ query took 1,322 ms, and the rest completed in less than 50 ms.

See Figure 4 for the full line timing.

The line numbers correspond to the line at the end of each step, so the row for line 52 shows how long it took to step over line 51. Now drill further into the GetImagesFromCloud method.

Figure 4 The debugger events table shows the elapsed time for each step

The GetImagesFromCloud method performs two logically independent operations, as seen in Figure 5. It synchronously downloads the list of pictures from the server, and then downloads each picture's thumbnail, one at a time.

You can time both operations independently by removing your existing breakpoints and placing new ones on the following lines:

Figure 5 The GetImagesFromCloud method (top) and improved code (bottom)

Start debugging again and wait until the application hits the first breakpoint. Then let the application run (press F5 to continue) to the second breakpoint. This lets the application retrieve the list of pictures from the cloud.

Then let the application run to the next breakpoint to measure downloading the thumbnails from the cloud. The PerfTips and break events tell you that it took 565 ms to get the list of pictures and 6,426 ms to download the thumbnails.

The performance bottleneck is in downloading the thumbnails.

When you look at the CPU Usage graph (see Figure 6), CPU usage is relatively high while the method retrieves the image list. The graph is fairly flat during the thumbnail download, indicating that this phase spent most of its long duration waiting on network I/O.

Figure 6 The CPU Usage graph indicates a delay on network I/O

To minimize the round-trip waiting time between the client and the server, start all of the thumbnail download operations at once and wait for them to finish using the .NET System.Threading.Tasks.Task.WhenAll method.

Replace lines 73 through 79 (the code from Figure 5) with the following code:

// Download thumbnails
var downloadTasks = new List<Task>();
foreach (var image in pictureList)
{
    string fileName = image.Thumbnail;
    string imageUrl = "/Images/" + fileName;
    downloadTasks.Add(DownloadImageAsync(new Uri(imageUrl), folder, fileName));
}
await Task.WhenAll(downloadTasks);

When you run this new version, you can see that it takes only 2,424 ms. That's an improvement of about four seconds.

Debug memory growth and leaks
If you looked at the memory usage graph while diagnosing the slow startup, you may have noticed a sharp increase in memory usage as the application started. The thumbnail list is a virtualized list, and only one full-sized image is displayed at a time.

One of the advantages of a virtualized list is that it only loads the content displayed on screen, so you wouldn't expect many thumbnails to be in memory at once.

To get to the root cause of this problem, you first find where in the code the memory growth occurs, and then take snapshots before and after the growth.

Comparing these snapshots, you will find the object types contributing most to the growth in memory.

The memory usage graph shows a high-level view of how your application uses memory. It reports the same counter, named Private Bytes, that performance monitoring tools use for your application. Private bytes are the amount of memory allocated to the process, excluding memory shared with other processes. They include the managed heap, the native heap, thread stacks, and other memory (such as the private pages of loaded .dll files).

When developing a new application or diagnosing a problem with an existing one, unexpected growth in the memory usage graph is often the first sign that code is not behaving as you expected.

Looking at the graph, you can use debugger features such as breakpoints and stepping to narrow down the code path of interest. The line numbers and durations shown in the Debugger Events tab in Figure 4 tell you that line 52, the LoadImagesFromDisk method call, is responsible for the unexpected growth.

Taking snapshots is typically the next step when investigating unexpected memory usage.

On the Memory tab, click the Take Snapshot button to create a snapshot of the heap. You can take snapshots at a breakpoint or while the application is running.

Because you know which line of code causes the memory usage spike, you have an idea of where to take the first snapshot. Set a breakpoint on the LoadImagesFromDisk call and take a snapshot when the code reaches that breakpoint. This snapshot serves as the baseline.

Next, step over the LoadImagesFromDisk method and take a second snapshot. Now, by comparing the snapshots, you can see which managed types were added to the heap across the function call you stepped over. The graph again shows the memory usage spike under investigation (see Figure 7). By hovering over the graph, you can see that memory usage is at 47.4 MB.

It's a good idea to make a mental note of the megabyte count, so you can later verify that your fix had a meaningful impact.


Figure 7 A significant spike in memory usage

The details view displays a brief overview of each snapshot. The overview includes the snapshot's sequence number, the time (in seconds of execution) at which it was taken, the size of the heap, and the number of live objects on the heap. Subsequent snapshots also show the change in size and object count relative to the previous snapshot.

The snapshot process lists only objects that are still live on the heap. That is, objects that are eligible for garbage collection are not included in the snapshot.

This way, you don't have to worry about when a collection last ran: the data in each snapshot is as if a garbage collection had just occurred.
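
You can approximate this "as if a collection just ran" view in your own code with GC.GetTotalMemory, which can optionally force a full collection before reporting (a rough sketch, not what the tool does internally):

```csharp
using System;

class LiveHeapExample
{
    static void Main()
    {
        // Passing true forces a garbage collection first, so the returned
        // number reflects live objects only -- the same idea as a snapshot.
        long liveBytes = GC.GetTotalMemory(forceFullCollection: true);
        Console.WriteLine($"Live managed heap: {liveBytes / 1024} KB");
    }
}
```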

The heap size shown in the snapshot overview will be lower than the private bytes shown in the memory usage graph. Private bytes cover all types of memory allocated by your process, while the snapshot shows the size of all live objects on the managed heap. If you see a large increase in the memory usage graph but growth in the managed heap doesn't account for most of it, the growth is happening elsewhere in memory.

From the snapshot overview, you can open the heap view and investigate the contents of the heap by type. Click the diff link in the Objects (Diff) column of the second snapshot to open the heap view in a new tab. Clicking the link sorts the heap view by the number of new objects of each type created since the previous snapshot was taken. This brings the types you're interested in to the top of the table.

The heap view (see Figure 8) has two main sections: the object type table in the top pane and the reference graph in the lower pane. The object type table displays the name, count, and size of every object type at the time the snapshot was taken.


Figure 8 Heap View snapshot in Diff mode

Several of the types in the heap view come from the framework.

If you have Just My Code enabled (the default), these are framework types that are referenced by your code or by types in your code. Using this view, you can spot one of the app's own types, PhotoFilter.ImageItem, near the top of the table.

In Figure 8, you can see that the Count Diff column shows 137 new ImageItem objects created since the previous snapshot was taken. The top five object types all have the same number of new objects, so these are probably related.

Now let's take a look at the second pane, the reference graph. If you expect a type to have been cleaned up by the garbage collector but it still appears in the object type table, the Paths to Root view helps you track down what is holding the reference.

Paths to Root is one of the two views in the reference graph. It is a bottom-up tree that displays the complete graph of types rooting the type you selected. An object is rooted if another live object, such as the Application object, holds a reference to it.

Unnecessarily rooted objects are often the cause of memory leaks in managed code.

Referenced Types is the other view, and it shows the opposite.

For the type selected in the object type table, this view shows the other types that it references. This information can help you determine why objects of the selected type are holding more memory than expected. That is useful in the current investigation, because a type may be holding much more memory than expected even when its instance count is not higher than expected.

Select the PhotoFilter.ImageItem row in the object type table. The graph updates to show the ImageItem reference graph. In the Referenced Types view, you can see that the ImageItem objects hold a total of 280 string objects and 140 instances each of three framework types: StorageFile, StorageItemThumbnail, and BitmapImage.

By total size, the string objects appear to be the largest contributor to the memory retained by the ImageItem objects. Focusing on the total size in the Diff columns makes sense, but here those numbers don't point to the root cause.

Some framework types, such as BitmapImage, hold only a very small amount of memory on the managed heap, even when they account for much larger allocations elsewhere (the decoded bitmap lives on the native heap).

The count of 140 BitmapImage instances is a more convincing clue. Remember that the thumbnail list in PhotoFilter is virtualized, so it should load those images on demand and make them available for garbage collection as they scroll out of view. However, it seems that all the thumbnails are loaded up front. Combine that with what you now know about BitmapImage objects being icebergs (small on the managed heap, large in native memory), and continue the investigation focused on them.

Right-click PhotoFilter.ImageItem in the reference graph and select Go To Definition to open the source file in the editor.

ImageItem defines a member field, m_photo, which is a BitmapImage, as seen in Figure 9.


Figure 9 Code referencing the m_photo member field

The first code path that references m_photo is the get accessor of the Photo property, which is data-bound to the thumbnail ListView in the UI. This looks like the on-demand loading you want: the BitmapImage is loaded (and therefore decoded on the native heap) only when requested.

The second code path referencing m_photo is the LoadImageFromDisk function, which is on the application's startup path.

When the application starts, LoadImageFromDisk is called once for every image.

This effectively pre-loads all of the BitmapImage objects. This behavior defeats the virtualized list view, because memory is allocated for every thumbnail regardless of whether it is displayed in the list view. The pre-loading approach also scales badly: the more pictures you have in your picture library, the higher the startup memory cost.

Loading the BitmapImage objects on demand is a more scalable approach.
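
A minimal sketch of the on-demand pattern, assuming a property shape like the sample's Photo property (the names here are illustrative, not the sample's exact code): the BitmapImage is created only the first time the UI binding asks for it.

```csharp
using System;
using Windows.UI.Xaml.Media.Imaging;

public class ImageItem
{
    private readonly string m_imagePath;
    private BitmapImage m_photo; // stays null until the UI requests it

    public ImageItem(string imagePath)
    {
        m_imagePath = imagePath;
    }

    // Data-bound by the virtualized ListView. Because the list only
    // realizes visible items, only on-screen thumbnails get decoded.
    public BitmapImage Photo
    {
        get
        {
            if (m_photo == null)
            {
                m_photo = new BitmapImage(new Uri(m_imagePath));
            }
            return m_photo;
        }
    }
}
```

Because decoded bitmaps live mostly on the native heap, deferring their creation is what flattens the startup spike seen in Figure 7.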

After you stop the debugger, comment out lines 81 and 82 in LoadImageFromDisk, which load the BitmapImage instances. Then run the same experiment again to verify that you have fixed the memory problem without breaking the application's functionality.

Press F5 and you will see that total memory usage is now only 26.7 MB (see Figure 10).

Take a pair of snapshots before and after the call to LoadImagesFromDisk, then compare them. You will see that there are still 137 ImageItem objects, but no BitmapImages (see Figure 11). The BitmapImages are now loaded on demand once you let the application continue running.


Figure 10 Memory graph after fixing the reference problem


Figure 11 Reference graph after fixing the memory problem

As mentioned earlier, the debugger-integrated tools also support taking snapshots of the native heap, or of the managed and native heaps at the same time. Which heap gets profiled depends on the debugger you are using:

The managed-only debugger takes snapshots of the managed heap only.
The native-only debugger (the default for native projects) takes snapshots of the native heap only.
The mixed-mode debugger takes snapshots of both the managed and native heaps.
You can adjust this setting on the debug page of your project properties.

When to run the tools without the debugger
It's important to mention the additional overhead introduced when you measure performance with the debugger. The main class of overhead comes from the fact that you typically run a debug build of the application.

The build that you ship to users, by contrast, is a release build.

In a debug build, the compiler keeps the executable as close as possible to the original source in structure and behavior, so that everything works as you would expect while debugging. A release build, on the other hand, tries to optimize the code for performance in ways that degrade the debugging experience. Examples include inlining function calls and constant variables, removing unused variables and code paths, and storing variable information in ways the debugger may not be able to read.

All of this means that CPU-intensive code can sometimes perform significantly slower in a debug build, while non-CPU-intensive operations, such as disk I/O and network calls, take about the same time. Memory behavior usually doesn't differ much between the two: leaked memory will still leak, and inefficient memory usage will still show up as large growth in either build.
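
If you log your own timings, it can be worth recording whether they were taken under conditions that inflate them. A small sketch (the warning text is illustrative):

```csharp
using System;
using System.Diagnostics;

class MeasurementContext
{
    static void Main()
    {
        // Both conditions below can inflate CPU-bound timings, as described above.
#if DEBUG
        Console.WriteLine("Warning: debug build; compiler optimizations are off.");
#endif
        if (Debugger.IsAttached)
        {
            Console.WriteLine("Warning: a debugger is attached; expect extra overhead.");
        }
    }
}
```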

The other class of overhead comes from the debugger itself when it is attached to the target application. The debugger intercepts module load and exception events. It also does extra work to support setting breakpoints and stepping.

Visual Studio tries to filter out this type of cost from the performance tools, but a small amount of overhead remains.

If you see a problem in the release build of an application, it will almost always reproduce in the debug build, though the reverse is not necessarily true. For this reason, the debugger-integrated tools are designed to help you proactively identify performance problems during development. If you find a problem in the debug build, you can switch to the release build to see whether the problem affects the release build as well.

However, you may decide to go ahead and fix the problem in the debug build anyway: if you determine that it is a worthwhile precaution (that is, fixing it reduces the chance of hitting performance problems later), if you determine that the problem is not CPU-bound (disk or network I/O), or if you simply want the debug build to run fast so that your application is quick to work with during development.

When a performance problem is reported against the release build, you want to make sure you can reproduce it and verify that your fix addresses it. The best way to do this is to run the tools without the debugger, against a release build, in an environment that matches the one where the problem was reported.

If you want to measure the duration of an operation, the debugger-integrated tools will only be accurate to within tens of milliseconds because of the overhead described above.

If you need higher accuracy, running the tools without the debugger is a better choice.

You can try out the new Diagnostic Tools debugger window by downloading Visual Studio 2015 RC. Using these new integrated debugging tools can help you improve performance as you debug your application.

