Analyze Performance While Debugging in Visual Studio 2015




Source:
https://msdn.microsoft.com/en-us/magazine/dn973013.aspx



Many developers spend the majority of their time getting applications to function correctly, and comparatively little time working on their performance. Although profiling tools have existed in Visual Studio for a long time, they were a separate set of tools to learn, and many developers didn't take the time to learn and use them when performance problems occurred.



This article introduces the new Diagnostic Tools debugger window in Visual Studio 2015 and describes how to use it to analyze performance as part of your regular debugging workflow. I'll first give an overview of the debugger's features and capabilities, and then take you on a deep-dive walkthrough. I'll show you how to use PerfTips to time sections of code between breakpoints and steps, how to use the Diagnostic Tools window to monitor CPU and memory, and how to take snapshots to drill into memory growth and leaks.



The features in this article are available for debugging most managed and native projects. Microsoft continues to add support for more project types and debug configurations. For up-to-date information about currently supported features, see the blog post on the Diagnostic Tools window at aka.ms/diagtoolswindow. A separate article in this issue explains how to use IntelliTrace in the Diagnostic Tools window (see "Use IntelliTrace to Diagnose Issues Faster") to quickly determine the root cause of bugs in your code.



Performance at debug time
Rather than running a full profiling tool, you may have taken one or more of the following steps to investigate performance:



Insert timing code into the application, such as a System.Diagnostics.Stopwatch, to measure how long various sections take, adding more stopwatches as needed to iteratively narrow down the hot path.
Step through the code to see whether any particular step "feels slow."
Press the Break All ("pause") button at random points to get a feel for how far execution has progressed. This is sometimes referred to as "poor man's sampling."
Over-optimize code without measuring performance at all, sometimes by applying a set of performance best practices across the entire code base.
These practices are usually inaccurate, time-consuming, or both. That's why there are now performance tools in the debugger. They help you understand your application's performance during normal debugging.
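The first practice above can be sketched as follows. This is a minimal illustration, not code from the article: ProcessBatch is a hypothetical stand-in for a suspected hot path, and the sleep merely simulates work.

```csharp
using System;
using System.Diagnostics;
using System.Threading;

static class StopwatchTiming
{
    // Hypothetical stand-in for a suspected hot path.
    public static void ProcessBatch() => Thread.Sleep(50);

    // Wrap the suspect code in a Stopwatch and report the elapsed time.
    public static long Measure(Action work)
    {
        var sw = Stopwatch.StartNew();
        work();
        sw.Stop();
        return sw.ElapsedMilliseconds;
    }

    static void Main()
    {
        Console.WriteLine($"ProcessBatch took {Measure(ProcessBatch)} ms");
    }
}
```

The drawback the article alludes to shows up immediately: every new measurement point means editing, rebuilding, and rerunning the application.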



The Diagnostic Tools window
The main difference you'll notice when debugging code in Visual Studio 2015 is the new Diagnostic Tools window that appears, as shown in Figure 1. These diagnostic tools present information in two complementary ways: graphs in the timeline in the top half of the window, and detailed information in the tabs at the bottom.


In Visual Studio 2015 you will see three tools in the Diagnostic Tools window: Debugger (including IntelliTrace), Memory Usage, and CPU Usage. You can enable or disable the CPU Usage and Memory Usage tools by clicking the Select Tools drop-down. The Debugger tool shows three tracks: Break Events, Output Events, and IntelliTrace Events.



Break event history and PerfTips
The history of break events lets you see how long each section of code took to run. Each rectangle represents the duration from when the application started or resumed execution until the debugger caused it to pause (see Figure 2).


The start of a rectangle indicates where the application began running, via the Continue (F5), Step (F10, F11, Shift+F11), or Run To Cursor (Ctrl+F10) commands. The end of a rectangle indicates where the application stopped, because it hit a breakpoint, completed a step, or because you used Break All.



The duration of the most recent break event is also displayed in the editor at the end of the current line of code. This is called a PerfTip. It lets you monitor performance without taking your eyes off your code.



In the details table below the graph, you can also see the history of break events and their PerfTip durations in tabular form. If you have IntelliTrace, additional events also appear in the table. You can use the filter to show only Debugger events and view just the break event history.



CPU and memory analysis
The timeline automatically selects a time range as you set breakpoints and step. When a breakpoint is hit, the current time range is reset to show only the most recent break event. Stepping extends the selection to include the latest break event. You can override the automatic time range selection by clicking a break event rectangle, or by clicking and dragging on the timeline.



The time range selection correlates the CPU Usage and Memory Usage graphs with the selected range, so you can understand the CPU and memory characteristics of specific sections of code. The graphs continue to update while the application is running, letting you keep an eye on CPU and memory as you interact with your application. You can switch to the Memory Usage tab, take snapshots, and view a detailed breakdown of memory usage.



IntelliTrace performance insights
IntelliTrace (not available in the Visual Studio Community edition) gives you even more insight when debugging managed code. IntelliTrace adds two tracks to the debugger events timeline: the Output track and the IntelliTrace track. These show the events displayed in the Output window, plus additional events collected by IntelliTrace, such as exceptions, ADO.NET events, and so on. Events on these tracks also appear in the Debugger events table.



You can correlate IntelliTrace events with spikes in the CPU Usage and Memory Usage graphs. The event timestamps also show you how long various operations in your application take. For example, you can add Debug.WriteLine statements to your code, and the timestamps on the resulting Output events will show you how long it took to run from one statement to the next.
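A sketch of that idea follows; LoadImageList and DownloadThumbnails are hypothetical helpers, not code from the sample. Each Debug.WriteLine becomes a timestamped Output event on the timeline, so the gap between consecutive timestamps is the elapsed time for the code in between.

```csharp
using System.Diagnostics;

static class TraceMarkers
{
    public static void LoadImages()
    {
        Debug.WriteLine("LoadImages: start");           // timestamped Output event
        // LoadImageList();                              // hypothetical helper
        Debug.WriteLine("LoadImages: list loaded");      // gap above = list load time
        // DownloadThumbnails();                         // hypothetical helper
        Debug.WriteLine("LoadImages: thumbnails done");  // gap above = thumbnail time
    }
}
```

Unlike the Stopwatch approach, these markers require no timing code of their own; the debugger supplies the timestamps.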



Improve performance and memory
Now that you've seen the capabilities of the window, let's look at the tools in action. In this section, we'll walk through solving performance problems in a sample application called PhotoFilter. The app downloads images from the cloud and loads images from the user's local picture library, so the user can view them and apply image filters.



If you want to follow along, download the source code from Aka.ms/diagtoolswndsample. Because performance differs from machine to machine, the numbers you see will differ from the ones shown here. They will even vary from run to run.



Slow application startup
When you start debugging the PhotoFilter app, you'll find that it takes a long time to launch and load the pictures. This is an obvious problem.



When you debug a functional problem in an application, you tend to form a hypothesis and start debugging on that basis. In this case, you might speculate that the pictures are loading slowly, and look for a good place to set a breakpoint to test that hypothesis. The LoadImages method is a great place to do so.



Set breakpoints at the start and end of the LoadImages function (in the code in Figure 3) and start debugging (F5). When the code hits the first breakpoint, press Continue (F5) to run to the second breakpoint. There are now two break events in the Debugger events timeline.


The first shows that the application ran for only 274 milliseconds before hitting the first breakpoint. The second shows that it took 10,476 ms to run the code in the LoadImages function before hitting the second breakpoint. The same value is displayed in the PerfTip after the code. So you've narrowed the problem down to the LoadImages function.



To get more detail on how long each line takes, restart debugging so you hit the first breakpoint again. This time, step through each line of code in the method to see which lines take the longest. From the PerfTips and the break event durations, you can see that GetImagesFromCloud took 7,290 ms, LoadImagesFromDisk took 736 ms, the LINQ query took 1,322 ms, and the rest completed in less than 50 ms.



The timing for all the lines is shown in Figure 4. The line number shown is the line the step ended on, so the entry for line 52 shows how long it took to step over line 51. Now drill further into the GetImagesFromCloud method.


The GetImagesFromCloud method performs two logically separate operations, as shown in Figure 5. It synchronously downloads the list of images from the server, and then downloads the thumbnail of each picture, one at a time. You can time both operations by removing your existing breakpoints and placing new ones on the following lines:


Restart debugging and wait until the application hits the first breakpoint. Then let the application run (by pressing F5 to continue) to the second breakpoint. This lets the application retrieve the list of pictures from the cloud. Then let the application run again to measure downloading the thumbnails from the cloud. The PerfTips and break events tell you that it took 565 ms to get the list of pictures and 6,426 ms to download the thumbnails. The performance bottleneck is in the thumbnail downloads.



When you look at the CPU Usage graph (shown in Figure 6), you can see that it is relatively high while the method retrieves the image list. The graph is fairly flat while the thumbnails download, indicating that the process spent that time waiting on network I/O.


To minimize the wait time between the client and the server, start all of the thumbnail downloads at once, and use the .NET System.Threading.Tasks types to wait for them all to complete. Replace lines 73 through 79 (from the code in Figure 5) with the following code:



 
// Download thumbnails
var downloadTasks = new List<Task>();
foreach (var image in pictureList)
{
  string fileName = image.Thumbnail;
  string imageUrl = ServerUrl + "/Images/" + fileName;
  downloadTasks.Add(DownloadImageAsync(new Uri(imageUrl), folder, fileName));
}
await Task.WhenAll(downloadTasks);


When you run this new version, you can see that it takes only 2,424 ms. That's an improvement of about four seconds.
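The gain comes from overlapping the waits instead of queuing them. That effect can be reproduced in miniature with Task.Delay standing in for each network round trip; the 100 ms delay and the count of five are illustrative values, not figures from the article.

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading.Tasks;

static class ParallelDownloads
{
    // Stand-in for DownloadImageAsync: simulate one network round trip.
    static Task FakeDownloadAsync() => Task.Delay(100);

    public static async Task<(long sequentialMs, long parallelMs)> CompareAsync()
    {
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < 5; i++)
            await FakeDownloadAsync();          // one at a time: waits add up
        long sequential = sw.ElapsedMilliseconds;

        sw.Restart();
        var tasks = new List<Task>();
        for (int i = 0; i < 5; i++)
            tasks.Add(FakeDownloadAsync());     // start all downloads at once
        await Task.WhenAll(tasks);              // then wait a single time
        long parallel = sw.ElapsedMilliseconds;

        return (sequential, parallel);
    }

    static async Task Main()
    {
        var (seq, par) = await CompareAsync();
        Console.WriteLine($"sequential: {seq} ms, parallel: {par} ms");
    }
}
```

The sequential loop pays the full delay five times, while the Task.WhenAll version pays it roughly once, which mirrors the four-second improvement seen in the walkthrough.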



Debugging memory growth and leaks
If you looked at the Memory Usage graph while diagnosing the slow startup, you may have noticed a sharp increase in memory usage as the application starts. The thumbnail list is a virtualized list, and only one full-sized image is displayed at a time. An advantage of a virtualized list is that it only loads the content displayed on screen, so you wouldn't expect many thumbnails to be in memory at once.



To get to the root cause of this problem, you first find where in the code the memory growth occurs. Then you take snapshots before and after the growth. By comparing those snapshots, you will find the object types that contribute most to the growth in memory.



The Memory Usage graph shows a high-level view of how your process uses memory. There is a performance counter for this named Private Bytes. Private Bytes is a measure of the amount of memory allocated to a process. It does not include memory shared with other processes. It does include the managed heap, the native heap, thread stacks, and other memory (such as the private portions of loaded .dll files).



When developing a new application or diagnosing a problem with an existing one, unexpected growth in the Memory Usage graph is often the first sign that code is not behaving as you intended. Watching the graph, you can use debugger features such as breakpoints and stepping to narrow down the code path of interest. From the line numbers and durations on the Debugger Events tab shown in Figure 4, you can determine that line 52, the LoadImagesFromDisk method call, is responsible for the unexpected growth. Taking a snapshot is usually the next step in cases of unexpected memory use. On the Memory Usage tab, click the Take Snapshot button to capture a heap snapshot. You can take snapshots at a breakpoint or while the application is running.



If you know which line of code causes the spike in memory usage, you have an idea of where to take the first snapshot. Set a breakpoint on the LoadImagesFromDisk method and take a snapshot when the code reaches that breakpoint. This snapshot serves as the baseline.



Next, step over the LoadImagesFromDisk method and take another snapshot. By comparing the snapshots, you will be able to see which managed types were added to the heap as a result of the function call you stepped over. The graph again shows the memory spike you are investigating (as shown in Figure 7). Hovering the mouse over the graph shows that memory is at 47.4 MB. It's worth making a mental note of that number, so you can later verify that your fix had a meaningful impact.



A significant spike in memory usage


The details view shows a brief overview of each snapshot. The overview includes the snapshot's sequence number, the time (in seconds) at which it was taken, the size of the heap, and the number of live objects on the heap. Subsequent snapshots also show the change in size and object count relative to the previous snapshot.



Snapshots list only the objects that are still live on the heap. That is, if an object is eligible for garbage collection, it is not included in the snapshot. This way, you don't have to worry about when a collection last ran: the data in each snapshot is as if a garbage collection had just occurred.
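A sketch of what "live" means here, using a WeakReference to observe collection without rooting the object. This assumes the runtime collects eagerly when GC.Collect is forced, which generally holds but is not guaranteed by the specification; the ImageBuffer type is purely illustrative.

```csharp
using System;
using System.Runtime.CompilerServices;

static class Liveness
{
    class ImageBuffer { public byte[] Data = new byte[1024]; }

    // Allocate in a separate, non-inlined method so the JIT does not keep
    // a stray reference to the buffer alive on the caller's stack frame.
    [MethodImpl(MethodImplOptions.NoInlining)]
    public static WeakReference AllocateAndDrop()
    {
        var buffer = new ImageBuffer();
        return new WeakReference(buffer); // weak: does not root the object
    }

    static void Main()
    {
        WeakReference weak = AllocateAndDrop();
        // The buffer is now eligible for collection. A heap snapshot taken
        // here would not count it, even before the GC actually runs.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        Console.WriteLine($"Still alive after GC: {weak.IsAlive}");
    }
}
```

An object that still shows up in a snapshot is therefore being rooted by something, which is exactly what the Paths to Root view (described below in the reference graph section) helps you track down.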



The heap size shown in the snapshot overview will be lower than the Private Bytes shown in the Memory Usage graph. Private Bytes covers all types of memory allocated by your process, whereas the snapshot shows the size of all live objects on the managed heap. If you see a large increase in the Memory Usage graph, but the growth in the managed heap does not account for most of it, the growth is occurring elsewhere in memory.
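The two numbers can be compared directly in code: GC.GetTotalMemory approximates what a managed heap snapshot reports (live managed objects only), while Process.PrivateMemorySize64 is the Private Bytes counter for the whole process.

```csharp
using System;
using System.Diagnostics;

static class MemoryCounters
{
    static void Main()
    {
        // Managed heap size, roughly what a heap snapshot would report;
        // forcing a collection first mimics the snapshot's "as if a GC
        // just occurred" view of live objects.
        long managed = GC.GetTotalMemory(forceFullCollection: true);

        // Private Bytes for the whole process: managed heap plus native
        // heap, thread stacks, and private pages of loaded modules.
        long privateBytes = Process.GetCurrentProcess().PrivateMemorySize64;

        Console.WriteLine($"Managed heap:  {managed / 1024} KB");
        Console.WriteLine($"Private bytes: {privateBytes / 1024} KB");
    }
}
```

Private Bytes is always the larger number; if it grows while the managed heap does not, the growth is happening in native memory.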



From the snapshot overview, you can open the heap view and investigate the contents of the heap by type. Click the diff link in the Objects (Diff) column of the second snapshot to open the heap view in a new tab. Clicking that link sorts the types in the heap view by the number of new objects created since the previous snapshot, which puts the types you are interested in near the top of the table.



The heap view of a snapshot (see Figure 8) has two main sections: the object type table in the top pane and the reference graph in the lower pane. The object type table displays the name, count, and size of each object type at the time the snapshot was taken.



Heap view Snapshot in Diff mode


Several of the types in the heap view come from the framework. If you have Just My Code enabled (the default), these are types that are referenced by your code or by types your code references. Using this view, you can spot a type from the app's own code, PhotoFilter.ImageItem, near the top of the table.



In Figure 8, you can see that the Count Diff column shows 137 new ImageItem objects created since the previous snapshot. The top five new object types all have the same number of new objects, so they are probably related.



Let's look at the second pane, the reference graph. If you expect a type to be cleaned up by the garbage collector, but it still appears in the object type table, the Paths to Root view can help you track down what is holding the reference. Paths to Root is one of the two views in the reference graph. It is a bottom-up tree that displays the complete graph of types that are rooting the type you selected. An object is rooted if another live object holds a reference to it. Unnecessarily rooted objects are often the cause of memory leaks in managed code.



The other view, Referenced Types, is the opposite. For the type selected in the object type table, it displays the other types that the selected type is referencing. This information can help you determine why objects of the selected type are holding more memory than expected. It is useful in the current investigation, because while the types may be using more memory than expected, they are not leaking.



Select the PhotoFilter.ImageItem row in the object type table. The reference graph updates to show the graph for ImageItem. In the Referenced Types view, you can see that the ImageItem objects hold a total of 280 string objects and 140 each of three framework types: StorageFile, StorageItemThumbnail, and BitmapImage.



Judging by total size, the string objects appear to contribute the most to the memory held by the ImageItem objects. Focusing on the Size Diff (Total) column makes sense, but here the size numbers don't lead to the root cause. Some framework types, such as BitmapImage, hold only a small amount of memory on the managed heap while holding much larger allocations elsewhere. The count of 140 new BitmapImage instances is the more convincing clue. Remember that the thumbnail list in PhotoFilter is virtualized, so it should load images on demand and make them eligible for garbage collection when they are no longer displayed. However, it looks as if all the thumbnails are being loaded up front. Armed with what you now know about the BitmapImage objects being the tip of the iceberg, continue the investigation by focusing on them.



Right-click PhotoFilter.ImageItem in the reference graph and select Go To Definition to open the source file in the editor. ImageItem defines a member field, m_photo, which is a BitmapImage, as shown in Figure 9.



Code that references the m_photo member field


The first code path that references m_photo is the get accessor of the Photo property, which is data-bound to the thumbnail ListView in the UI. This suggests the BitmapImages should be loaded (and therefore decoded on the native heap) on demand.



The second code path that references m_photo is the LoadImageFromDisk function. This is on the application's startup path. It is called for each image as the application starts, effectively preloading all of the BitmapImage objects. This defeats the purpose of a virtualized list view, because all of the memory is allocated regardless of which thumbnails the list view displays. The preloading also does not scale: the more pictures in the picture library, the higher the memory cost at startup. Loading the BitmapImage objects on demand is a more scalable solution.
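A hypothetical sketch of that on-demand pattern follows. This is not the actual PhotoFilter source; LoadBitmap stands in for the BitmapImage decode. The field stays null until the data-bound Photo property is first read, so the startup path allocates nothing.

```csharp
using System;

class ImageItem
{
    // Stand-in for the expensive BitmapImage decode.
    static object LoadBitmap(string path) => new byte[16];

    private object m_photo;
    private readonly string m_path;

    public ImageItem(string path) { m_path = path; }

    // Nothing is allocated at construction; the bitmap is decoded the
    // first time the data-bound UI actually asks for it, then cached.
    public object Photo => m_photo ??= LoadBitmap(m_path);

    public bool IsLoaded => m_photo != null;
}
```

With this shape, constructing an ImageItem per picture at startup costs almost nothing, and decoding happens only for thumbnails the list view actually displays.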



After stopping the debugger, comment out lines 81 and 82 in LoadImageFromDisk, which preload the BitmapImage instances. Then rerun the same experiment to verify that you fixed the memory problem without breaking the application's functionality.



Press F5 and you will see that total memory usage is now only 26.7 MB (see Figure 10). Take another pair of snapshots, before and after the call to LoadImagesFromDisk, and compare them. You will see that there are still 137 ImageItem objects, but no BitmapImages (see Figure 11). The BitmapImages will be loaded on demand once you let the application finish booting.



Memory graph after fixing the reference issue


Reference graph after fixing the memory problem


As mentioned earlier, the debugger-integrated tools also support taking snapshots of the native heap, or of the managed and native heaps simultaneously. Which heap is profiled depends on the debugger you are using:



The managed-only debugger takes snapshots of the managed heap only.
The native-only debugger (the default for native projects) takes snapshots of the native heap only.
The mixed-mode debugger takes snapshots of both the managed and native heaps.
You can adjust this setting on the Debug page of your project's properties.



Running the tools without the debugger
It is worth mentioning the additional overhead introduced when you measure performance using the debugger. The main class of overhead comes from the fact that you typically run a debug build of the application, whereas the version you ship to users is a release build.



In a debug build, the compiler keeps the structure and behavior of the executable as close to the original source code as possible, so everything works as you would expect while debugging. The release build, on the other hand, tries to optimize the code in ways that degrade the debugging experience. Examples include inlining function calls and constant variables, removing unused variables and code paths, and storing variable information in ways the debugger may not be able to read.



All of this means that CPU-intensive code can sometimes run significantly slower in debug builds. Non-CPU-intensive operations, such as disk I/O and network calls, take the same amount of time either way. Memory behavior usually does not differ, meaning that leaked memory leaks in both builds, and inefficient memory usage still shows up as a large increase in both cases.



The other class of overhead comes from the debugger being attached to the target application. The debugger intercepts module load and exception events. It also does other work required to let you set breakpoints and step. Visual Studio tries to filter this type of overhead out of the performance tools, but a small amount of overhead remains.



If you see an issue in a release build of an application, it will almost always reproduce in the debug build, but not necessarily the other way around. For this reason, the debugger-integrated tools are designed to help you proactively identify performance issues during development. If you find a problem in a debug build, you can flip over to a release build to see whether it affects the release build as well. However, you may decide to go ahead and fix the problem in the debug build anyway: if you determine it is a good defensive fix against a likely future performance problem, if the problem is non-CPU-bound (disk or network I/O), or if you want the debug build to be fast so your application is responsive during development.



When a performance issue is reported against a release build, you will want to make sure you can reproduce it, and later verify that you have resolved it. The best way to do that is to run the tools without the debugger, against a release build, in an environment that matches the one where the problem was reported.



If you are measuring how long operations take, the debugger-integrated tools will only be accurate to within tens of milliseconds because of that overhead. If you need a higher level of accuracy, running the tools without the debugger is a better option.



Summary
You can try out the new Diagnostic Tools debugger window by downloading Visual Studio 2015 RC. Using these new integrated debugging tools can help you improve performance as you debug your applications.




