The SQL92 standard defines several basic data types, which form the basis of the data types found in SQL databases. In the article "string data type", we discussed in detail the string types defined by the SQL92 standard. Now let's look at the numeric data types.
It is worth experimenting with different database implementations and passing data between them; doing so will deepen your understanding of numeric data types. This article gives a summary of the numeric data types, which you can combine with the documentation for your database.
Of the four categories of data types (string, numeric, datetime, and interval), numeric has the most types and the most constraints. When exchanging data between different database implementations, numeric precision is also the easiest to lose. Implementation differences between Oracle and SQL Server (even for types of the same declared length) can cause numbers transferred between them to be truncated, changing their values. Therefore, before porting a program, you must have a clear understanding of the differences in data definitions between the two platforms, as well as the risk of compromising data accuracy.
With these warnings in mind, let's look at the numeric types of the SQL92 standard.
Basic numeric types
The types associated with numeric values are collectively known as numeric types. Every numeric value has a precision, which is its number of significant digits. Some values also have a scale, which indicates the number of significant digits to the right of the decimal point. For example, the number 1234.56 has a precision of 6 and a scale of 2, and can be declared as NUMERIC(6,2).
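As a quick illustration (in Python rather than SQL, since database behavior varies by vendor), the `decimal` module makes precision and scale easy to inspect:

```python
from decimal import Decimal

# 1234.56 has 6 significant digits (precision) and
# 2 digits to the right of the decimal point (scale),
# so it fits a column declared as NUMERIC(6,2).
n = Decimal("1234.56")

precision = len(n.as_tuple().digits)   # number of significant digits
scale = -n.as_tuple().exponent         # digits right of the decimal point

print(precision, scale)  # 6 2
```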
Each database implementation has its own rules for rounding or truncating numeric values. In addition to attributes for obtaining a value's length and other numeric handling, SQL92 provides built-in operations such as addition, subtraction, and multiplication. All numeric types can be compared with one another and assigned to one another. Although implementations differ, they have one thing in common: results generally retain the maximum possible precision.
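For example, exact-numeric arithmetic typically widens the result rather than dropping digits. Python's `decimal` module mirrors this behavior (an analogy for the concept, not any particular database's rule):

```python
from decimal import Decimal

a = Decimal("1.10")   # scale 2
b = Decimal("2.25")   # scale 2

# Multiplication keeps every significant digit: the scales add (2 + 2 = 4),
# so no precision is lost in the result.
product = a * b
print(product)  # 2.4750
```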
NUMERIC
Usage: NUMERIC(precision, scale)
An exact numeric type, that is, a literal representation of the number's value. (Values may be rounded or truncated to fit the specified precision and scale, according to predefined rules.)
To conform to the number of decimal digits specified by the scale, excess fractional digits are rounded away; the representation is decimal (base 10).
The total length of the number equals the precision; if the scale is greater than 0 (the number has a fractional part), the length is the precision plus 1.
The number of digits in the fractional part equals the scale.
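The coercion to a declared precision and scale can be sketched with Python's `decimal` module. This is a hedged illustration: the helper name is hypothetical, and the rounding mode is an assumption, since real databases choose their own (some truncate instead of rounding):

```python
from decimal import Decimal, ROUND_HALF_UP

def to_numeric(value: str, precision: int, scale: int) -> Decimal:
    """Coerce a value as a hypothetical NUMERIC(precision, scale) column might.

    Rounds to `scale` decimal places, then checks the total digit count
    against `precision`. ROUND_HALF_UP is an assumption; vendors vary.
    """
    q = Decimal(value).quantize(Decimal(1).scaleb(-scale), rounding=ROUND_HALF_UP)
    if len(q.as_tuple().digits) > precision:
        raise ValueError(f"{value} does not fit NUMERIC({precision},{scale})")
    return q

print(to_numeric("1234.567", 6, 2))  # 1234.57 (excess fraction rounded away)
```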
DECIMAL | DEC
Usage: DECIMAL(precision, scale) | DEC(precision, scale)
An exact numeric type.
Represented in decimal (base 10).
The total length of the number equals the precision; if the scale is greater than 0 (the number has a fractional part), the length is the precision plus 1.
The number of decimal places must not be less than the scale; the upper limit on the number of decimal places is set by the database vendor.
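The practical difference from NUMERIC is subtle: a NUMERIC(p,s) column holds exactly s decimal places, while a DECIMAL(p,s) column must hold at least s and may hold more, up to a vendor-defined limit. A small Python sketch of that distinction (the vendor keeping three decimal places is an assumption for illustration):

```python
from decimal import Decimal

# A value inserted into a column declared with precision 5, scale 2.
value = Decimal("1.234")

# NUMERIC(5,2): stored with exactly 2 decimal places.
numeric_stored = value.quantize(Decimal("0.01"))    # rounds to 1.23

# DECIMAL(5,2): the column's actual scale is vendor-chosen but at least 2,
# so a vendor using scale 3 (an assumption) could keep the value intact.
decimal_stored = value                              # stays 1.234

print(numeric_stored, decimal_stored)  # 1.23 1.234
```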
INTEGER | INT
Usage: INTEGER (precision)
An exact numeric type.
Represented in binary or decimal, depending on the number of bits used to represent the number (this is implementation-specific; the same applies to SMALLINT).
The scale is always 0.
The database vendor defines the maximum and minimum precision.
The vendor may provide a default precision.
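Since the precision limits for INTEGER and SMALLINT are vendor-defined, portable code should check ranges itself. A minimal sketch, assuming the common 32-bit INTEGER and 16-bit SMALLINT ranges (an assumption; consult your database's documentation):

```python
# Common (but vendor-specific) ranges; SQL92 itself leaves them undefined.
INT_RANGES = {
    "SMALLINT": (-2**15, 2**15 - 1),   # typical 16-bit range
    "INTEGER":  (-2**31, 2**31 - 1),   # typical 32-bit range
}

def fits(sql_type: str, value: int) -> bool:
    """Return True if `value` fits the assumed range for `sql_type`.

    The scale of an integer type is always 0, so only whole numbers apply.
    """
    lo, hi = INT_RANGES[sql_type]
    return lo <= value <= hi

print(fits("SMALLINT", 40000))   # False: exceeds 32767
print(fits("INTEGER", 40000))    # True
```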