Problem
Looking at data-management needs and the rules of relational database management systems, relational databases remain the most appropriate way to maintain data with a reasonable level of concurrency and to support data-management tasks such as backup, bulk cleanup, changes to data structures, and so on.
One problem is the difference from the programming languages used in traditional applications. SQL (Structured Query Language) is a declarative language, and in most companies it has become the "data language" for describing "what I need" and "where to get it". OOP (object-oriented programming) languages have become the most widely used languages among developers at R&D (research and development) companies around the world. So how do we bridge this gap?
Expert answers
These two trends make it necessary to bridge the gap by translating requests from object-oriented languages into SQL. In most cases, a DAL (data access layer) is used as the mechanism that manages all of this translation work in a centralized manner.
Database vendors (Microsoft, Oracle, IBM, and so on) offer numerous proprietary extensions to their own flavors of SQL, so the translation in the DAL needs to support many options; the end result is that execution sometimes loses the performance optimizations embedded in the engine. This leads many of these DALs to work in a very straightforward manner: they break the request into many small segments, translate each into a corresponding SQL fragment, and assemble a "SELECT ... FROM ... WHERE ..." statement that will nominally do the work.
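To illustrate, here is a minimal Python sketch of how such a naive generator produces long query text. The table and column names are hypothetical, and real DALs are far more elaborate; the point is only how fast the text grows:

```python
def build_query(table, columns, ids):
    """Naive DAL-style generation: one literal predicate per requested key,
    OR-ed together, instead of a single parameterized IN clause."""
    select_list = ", ".join(columns)
    # The statement text grows linearly with the number of keys requested.
    where = " OR ".join(f"(id = {i})" for i in ids)
    return f"SELECT {select_list} FROM {table} WHERE {where}"

# Hypothetical example: fetching 10,000 rows by key from a table t1000.
sql = build_query("t1000", ["id", "name"], range(10_000))
print(len(sql))  # ~150,000 characters of "machine-written SQL"
```

A parameterized `WHERE id IN (...)` would express the same request in a small fraction of the text, which is exactly the kind of difference the tests in this article are meant to quantify.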
"Machine-written SQL statements" can sometimes be very long text statements. On 32-bit and 64-bit systems, the string length that contains the SQL statement is defined as 65,536 * network packet size. The default network packet size is 4096, so the SQL statement text is limited to 256MB.
I suspected that long text queries (far shorter than 256MB) would still be a burden on the server's CPU, so I tested it and am publishing the results in this article. In this article, I will:
Prove that long text queries consume CPU.
Give a sense of the actual consumption to expect on a medium-sized server:
A dual-core CPU with 2GB of RAM and 4 x 10,000 RPM disks.
Test table features
For the test, I'll create a table (called t1000) with 200,000 rows. The table mixes a number of different data types because I think that makes it a reasonable stand-in for a typical table in a production environment. Its features include:
A single integer field used as the primary key (by default, the clustered index).
A varchar field.
A char field that simulates an additional 1KB of data per row.
Five integer fields used to build the long text queries in the WHERE clause.
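The feature list above can be sketched as DDL text. The column names and sizes below are my assumptions, not given in the article, so treat this as illustrative only:

```python
# Hypothetical DDL for the test table described above. Column names and
# sizes (name, filler, c1..c5) are assumptions, not taken from the article.
ddl = """
CREATE TABLE t1000 (
    id     INT          NOT NULL PRIMARY KEY, -- clustered index by default
    name   VARCHAR(100) NOT NULL,
    filler CHAR(1000)   NOT NULL,             -- ~1KB of extra data per row
    c1 INT, c2 INT, c3 INT, c4 INT, c5 INT    -- targets of the long WHERE clauses
)
""".strip()
print(ddl.splitlines()[0])  # → CREATE TABLE t1000 (
```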