In x86 assembly, the DIV instruction requires the dividend to be held in AX (when the divisor is 8 bits) or in DX:AX (when the divisor is 16 bits). In other words, the dividend is always twice the width of the divisor: with an 8-bit divisor, for example, the dividend is 16 bits. Why must the dividend be twice as wide as the divisor?

One explanation goes like this. The CPU fundamentally performs addition, and other arithmetic operations are reduced to it; division, in particular, can be turned into repeated addition. When the CPU sees 36/6, it can reason: how many 6s must be added together to reach 36? It tries 1×6, then 2×6, and so on, until it finds that six 6s give 36, so the quotient is 6. The CPU keeps accumulating the divisor until the sum reaches the dividend, and that count is the quotient. This leads to a problem: if the dividend were not twice the width of the divisor, the intermediate values could exceed the maximum representable in the divisor's width. Guaranteeing that the dividend is twice as wide as the divisor avoids this. If you have any doubts, you can verify it with a little arithmetic of your own!