.NET + SQL Server Enterprise Application Performance Optimization Note 1: Research


After a period of research, we started performance optimization this week. Before getting into the actual work, I will summarize the information I have gathered so far.

1. Background

This is a system developed back in 2003 with .NET 1.1 + SQL Server 2000, running on Windows Server 2003. In all the years since, apart from an operating system upgrade (from Windows 2000 to 2003 at the time) and routine maintenance, the .NET environment and the database have not changed. The system now holds several years of data, and some tables have millions of rows, so with missing indexes and some questionable application code it runs very slowly, and performance optimization is required. These days I develop with VS2008 and SQL Server 2008, so taking over such an old project felt a little uncomfortable at first; I have not touched SQL Server 2000 for a long time.

2. Hardware

On the hardware side, the customer's servers are actually quite good: one Web server and one database server, each with four dual-core CPUs, 8 GB of memory, a gigabit network, and RAID 5 disks. Overall it looks fine. When I learned the servers had 8 GB of memory, my first reaction was that the customer was certainly wasting it, and indeed: the machine has 8 GB, but the total memory used by all programs is only about 2 GB and never goes higher. Why? Because the customer runs a 32-bit operating system, which by default addresses only 4 GB of memory: 2 GB reserved for the operating system and 2 GB for applications. In Task Manager you can see that SQL Server occupies only about 2 GB of memory and cannot take any more.

SQL Server exists to process large amounts of data, and memory is far faster than disk. If the data being processed is already in memory, it is handled much faster than if it has to be read from disk, so the more memory SQL Server can use, the better. To break through the 32-bit operating system's 2 GB limit for applications, you can enable the /3GB switch, which changes the split to 1 GB for the operating system and 3 GB for applications. With 8 GB of physical memory, however, the /3GB switch alone is not enough: you also need to enable the operating system's /PAE switch and use SQL Server's AWE feature. Another option is to move to a 64-bit operating system and a 64-bit SQL Server.
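As a rough sketch of the SQL Server side of this (the /3GB and /PAE switches themselves go into the operating system's boot.ini), AWE in SQL Server 2000 is turned on through sp_configure; the 6144 MB cap below is only an illustrative value, not a recommendation for this particular server:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
-- Requires the SQL Server service account to hold the "Lock Pages in Memory" right.
EXEC sp_configure 'awe enabled', 1
RECONFIGURE
GO
-- Leave some memory for the OS and other processes (value in MB, illustrative).
EXEC sp_configure 'max server memory', 6144
RECONFIGURE
GO
-- The SQL Server service must be restarted for AWE to take effect.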

3. Data Access

I briefly reviewed the program code. The system is a B/S application with a three-tier architecture. Database access mainly uses SqlHelper to call stored procedures and SQL statements, and then a DataReader is used to build an object or a collection of objects. Because the system was built under constantly changing requirements, it shows a problem common to many projects: to catch up with deadlines, everyone ignores coding standards and the architecture. Some code uses stored procedures, some uses SQL statements, and some even writes SQL directly in the UI layer, passes the string down through the business layer, and finally executes it in the data layer. This is not the first time I have seen such a program; the same pattern appeared in a project during my college internship, so I find it understandable.

While reviewing the data access layer, I saw a lot of code that reads a DataReader to fill an object:

private void fillRegionLevel(RegionLevelMod level, SqlDataReader reader)
{
    // Administrative code
    if (!reader.IsDBNull(0))
        level.LevelCode = reader.GetString(0);
    // Administrative level name
    if (!reader.IsDBNull(1))
        level.LevelName = reader.GetString(1);
    // Administrative level description
    if (!reader.IsDBNull(2))
        level.LevelDescribe = reader.GetString(2);
}

Written this way the program works, but it depends entirely on the order of the columns returned by the database. If the query changes and a new column appears at the beginning, reader.GetString(0) will read the wrong column, every subsequent read will be misplaced as well, and extending the code becomes troublesome. My personal suggestion is to read the DataReader by column name, for example:

if (!Convert.IsDBNull(reader["Code"]))
{
    level.LevelCode = reader["Code"].ToString();
}

Of course, it does not have to be that tedious. When a project fills objects from a DataReader, I usually take a reflection-based approach: add an attribute to each field when defining the object class, and then use a single shared method that maps the columns returned by the DataReader onto the class members through reflection. For more information, see the sample code: /Files/studyzy/LoadDbToObjectDemo.rar.
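Below is a minimal sketch of that attribute-plus-reflection idea, not the code from the linked sample: the attribute name DbColumnAttribute, the helper DbMapper.Fill, and the "Name" column are my own illustrative choices.

using System;
using System.Data;
using System.Reflection;

// Hypothetical attribute tying a property to a column name in the result set.
[AttributeUsage(AttributeTargets.Property)]
public class DbColumnAttribute : Attribute
{
    private readonly string columnName;
    public DbColumnAttribute(string columnName) { this.columnName = columnName; }
    public string ColumnName { get { return columnName; } }
}

public class RegionLevelMod
{
    private string levelCode;
    private string levelName;

    [DbColumn("Code")]
    public string LevelCode { get { return levelCode; } set { levelCode = value; } }

    [DbColumn("Name")]
    public string LevelName { get { return levelName; } set { levelName = value; } }
}

public class DbMapper
{
    // Copies the current record of the reader into the target object,
    // matching columns by the names declared with DbColumnAttribute.
    public static void Fill(object target, IDataRecord record)
    {
        foreach (PropertyInfo prop in target.GetType().GetProperties())
        {
            object[] attrs = prop.GetCustomAttributes(typeof(DbColumnAttribute), true);
            if (attrs.Length == 0 || !prop.CanWrite)
                continue;

            string column = ((DbColumnAttribute)attrs[0]).ColumnName;
            int ordinal = record.GetOrdinal(column);
            if (!record.IsDBNull(ordinal))
                prop.SetValue(target, record.GetValue(ordinal), null);
        }
    }
}

Usage then becomes a single call per row, such as DbMapper.Fill(level, reader), instead of a hand-written fill method for every class.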

Another problem is that dictionary tables (such as province/city regions and type codes) should be cached in the Web server's memory instead of being read from the database on every request.
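A minimal sketch of that kind of cache using the ASP.NET application cache follows; the cache key, the 12-hour expiration, and the LoadRegionTableFromDb placeholder are my own assumptions:

using System;
using System.Data;
using System.Web;
using System.Web.Caching;

public class DictionaryCache
{
    // Returns the region dictionary table from the ASP.NET cache, hitting the
    // database only when no cached copy exists (or it has expired).
    public static DataTable GetRegions()
    {
        const string key = "Dict_Regions";
        DataTable regions = HttpRuntime.Cache[key] as DataTable;
        if (regions == null)
        {
            regions = LoadRegionTableFromDb();
            HttpRuntime.Cache.Insert(key, regions, null,
                DateTime.Now.AddHours(12), Cache.NoSlidingExpiration);
        }
        return regions;
    }

    private static DataTable LoadRegionTableFromDb()
    {
        // Placeholder for the existing SqlHelper / stored procedure call.
        return new DataTable();
    }
}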

4. Charts

A large number of charts are used in the system. For this, the developer wrote a server control: the page sets its properties, passes in the data, and finally calls DataBind. Abstracting a reusable control here was a good idea. Overall, a chart is produced as follows:

(1) The page passes the data into the Chart control and executes its DataBind method; the control draws the chart with GDI+.

(2) The drawn image is saved to a folder on the server's hard disk, with a file name based on the current time and other attributes.

(3) The path of the saved image is combined into the rendered HTML as an img tag.

This function is implemented, but there are also several problems:

  • The chart is drawn on the server. If a large number of charts need to be displayed, this adds load to the server.
  • The chart is saved to the hard disk and then referenced by that path in an img tag, so each image costs two rounds of disk I/O (one write, one read).
  • The data reading and drawing for every chart run on the same thread as Page_Load. If one chart takes a long time to read or draw its data, the response of the whole page is blocked.

In my own tracing tests, if a chart takes five seconds to draw, the page cannot open in less than five seconds, and the whole system feels unresponsive. The problems above can be addressed as follows:

  • Move chart drawing from the server to the client by using Flash or Silverlight (JavaScript charting is also an option). The client only downloads the Flash; the system sends the chart's data set to it via XML, AMF, or a similar format, and the Flash draws the whole chart. This reduces the load on the server, speeds up page access, and improves the user experience.
  • Skip the disk I/O: after the chart is drawn on the server, write it directly into a page Response.
  • Read and draw the chart data on a separate page. The page that displays the chart only needs to output an img tag whose src points to that page (Chart.aspx); all data reading and drawing happen in Chart.aspx. Even if a chart takes five seconds to draw, the main page thread does no drawing and returns quickly, and the browser requests Chart.aspx only after the main page has loaded.

Personally I recommend the first approach, and it can be considered for the new version of the system. For optimizing the existing system, however, such a large change is not realistic, so the third approach is the one to use.
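As a sketch of that third approach under my own naming assumptions (Chart.aspx, the id query string parameter, and DrawChart are illustrative, not the project's real names), the main page only emits <img src="Chart.aspx?id=..." /> and the chart page streams the image from memory:

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Web.UI;

// Code-behind for a hypothetical Chart.aspx: it draws the chart and writes the
// bytes straight into the Response, so nothing touches the disk and the main
// page never blocks on chart rendering.
public class ChartPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        string chartId = Request.QueryString["id"];

        using (Bitmap bitmap = DrawChart(chartId))
        using (MemoryStream stream = new MemoryStream())
        {
            bitmap.Save(stream, ImageFormat.Png);
            Response.Clear();
            Response.ContentType = "image/png";
            Response.BinaryWrite(stream.ToArray());
        }
    }

    private Bitmap DrawChart(string chartId)
    {
        // Placeholder for the existing GDI+ control logic: load the data for
        // chartId and plot it onto the bitmap.
        Bitmap bitmap = new Bitmap(600, 400);
        using (Graphics g = Graphics.FromImage(bitmap))
        {
            g.Clear(Color.White);
        }
        return bitmap;
    }
}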

5. HTML

The homepage of the system has performance problems: it takes a long time to open, and on a poor network the wait is even longer. You can use Firebug in Firefox to track the size and response time of every resource requested when the homepage opens, or analyze it with a network packet capture tool. From my packet capture analysis, the homepage contains about KB of HTML, which is not very big, and there are not many referenced images, CSS, and JS files, yet the overall response is still slow. Apart from the large number of database operations, the slowness is closely tied to the HTML itself. Looking at the HTML, I found the following problems:

  • There is a large ViewState in the HTML, but the homepage mostly binds read-only data, so ViewState can be disabled on many controls such as the DataGrid (see the sketch after this list).
  • The page layout is built by splicing large Tables, so the browser has to load an entire big Table before it can render it. Using div + CSS instead lets each DIV render as soon as its content arrives.
  • AJAX is not used. Much of the time-consuming content on the page could be loaded and bound asynchronously through AJAX.
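For the ViewState point, a small code-behind sketch is shown below; the control name dgSummary and the LoadSummaryData method are placeholders for the real ones:

using System;
using System.Data;
using System.Web.UI;
using System.Web.UI.WebControls;

// Homepage code-behind sketch: the grid is read-only, so ViewState is turned
// off before binding, which removes the large hidden __VIEWSTATE field from
// the rendered HTML.
public class HomePage : Page
{
    protected DataGrid dgSummary;

    protected void Page_Load(object sender, EventArgs e)
    {
        dgSummary.EnableViewState = false;
        dgSummary.DataSource = LoadSummaryData();
        dgSummary.DataBind();
    }

    private DataTable LoadSummaryData()
    {
        // Placeholder for the existing SqlHelper / stored procedure call.
        return new DataTable();
    }
}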

6. Database

The database is the focus of my optimization. Because it is SQL Server 2000, there are no DMVs, none of the performance monitoring tools that SSMS provides, no included columns for indexes... many useful features are missing, which is quite inconvenient. In general, I think database performance optimization has three main directions:

  • Optimize indexes: create the missing ones and remove the unused ones.
  • Rewrite queries so that their predicates are SARGable and can actually use those indexes (see the example after this list).
  • Reduce blocking and resource waits, and avoid deadlocks.
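As an illustration of the second point (the Orders table and its columns are made-up names), a predicate that wraps the column in a function cannot use an index seek, while the rewritten form can:

-- Non-SARGable: the function is applied to the column, so an index on OrderDate cannot be seeked.
SELECT OrderID FROM Orders
WHERE DATEDIFF(day, OrderDate, GETDATE()) = 0

-- SARGable rewrite: the column stands alone on one side of the comparison.
SELECT OrderID FROM Orders
WHERE OrderDate >= CONVERT(char(8), GETDATE(), 112)
  AND OrderDate <  DATEADD(day, 1, CONVERT(char(8), GETDATE(), 112))

-- Supporting index.
CREATE INDEX IX_Orders_OrderDate ON Orders (OrderDate)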

My next job focuses on these three directions. An essential tool for SQL Server performance work is SQL Server Profiler (the Event Profiler in SQL Server 2000). One approach is to use Profiler to capture a trace during business peaks in the production environment; the other is to capture a trace in the test environment, free from interference by other users and programs, while a single page is opened or a single operation is executed. The trace results are saved into a database table, and queries against that table pick out the SQL statements with the largest Reads and Duration, which are then optimized first.
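For example, assuming the trace has been saved into a table named dbo.PeakTrace (an illustrative name), a query like this pulls out the most expensive statements:

SELECT TOP 20
    TextData, Duration, Reads, Writes, CPU, StartTime
FROM dbo.PeakTrace
WHERE EventClass IN (10, 12)   -- RPC:Completed and SQL:BatchCompleted
ORDER BY Reads DESC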

For deadlocks in the system, the following commands turn on deadlock tracing. Once a deadlock occurs in the database, its details are written to the SQL Server error log.

DBCC TRACEON (1204, -1)    -- report deadlock details for all connections

DBCC TRACESTATUS (-1)      -- confirm which trace flags are currently active

Database optimization is a deep subject. Over the next few days I will concentrate on the database, record the optimization process in as much detail as possible, and I hope it will be helpful to everyone.
