Swift is an interpreted language, isn't it? Can it really be that fast, or is the "210x" figure just its speed on one special case (comparison and arithmetic) that shouldn't be hyped so much?
Replies:
Contrary to the asker's intuition, arithmetic is exactly where Python is slowest relative to C. A single integer a + b in the Python VM runs through hundreds of lines of C code: bytecode dispatch, indirect calls driven by dynamic typing, transparent bigint promotion, and so on. The same addition in C takes no more than a handful of instructions, so the gap is well over an order of magnitude. This is also why all kinds of JITs love to use arithmetic for their demos.
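To see that dispatch overhead concretely, here is a minimal sketch using only the standard library's dis module (the exact opcode names vary between CPython versions):

```python
# Minimal sketch: the bytecode CPython dispatches for a bare a + b.
import dis

def add(a, b):
    return a + b

dis.dis(add)
# Prints LOAD_FAST a, LOAD_FAST b, then BINARY_OP (BINARY_ADD on older CPython)
# and RETURN_VALUE. Each opcode goes through the interpreter loop, and the add
# itself runs through generic C code that handles dynamic typing and bigint
# overflow, which is where the hundreds of lines come from.
```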
In real UI code, the performance of the language used for the user-level logic hardly matters. Most user logic is glue code: "when user action A happens, change the state of UI elements B, C, D", "pull fields x, y, z out of the server response and set them on the corresponding UI elements", and so on. On average the framework executes hundreds of lines of its own code for every line of user code; step into the framework code in a debugger and you will see this. So as long as the framework is implemented in an efficient language, it doesn't matter much that the user code is written in Python and runs a few times slower. As for benchmarks, if they don't disclose:
1. the actual test cases
2. the specific implementation in each language
then whatever the result is, it means little. As for Swift itself: since it compiles to machine code, it shouldn't be compared against Python running on the official interpreter in the first place.
Python can also be compiled, with LLVM. Comparing Swift against that kind of Python would be a meaningful comparison.
The package is called Numba: add one decorator line above a function and that function gets JIT-compiled to machine code at runtime.
For the arithmetic the question is mainly concerned with, addition for example, Python with Numba can come out faster than C.
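As a rough sketch of what that looks like (assuming Numba is installed; the function names and the loop are just an illustrative micro-benchmark, not the benchmark Apple ran):

```python
# Compare a pure-Python loop with the same loop JIT-compiled by Numba.
import time
from numba import njit

def py_sum(n):
    total = 0
    for i in range(n):
        total += i
    return total

@njit  # the single decorator line: compile this function to machine code at runtime
def nb_sum(n):
    total = 0
    for i in range(n):
        total += i
    return total

n = 10_000_000
nb_sum(1)  # warm-up call so the timing below measures compiled code, not compilation

t0 = time.perf_counter(); py_sum(n); t1 = time.perf_counter()
t2 = time.perf_counter(); nb_sum(n); t3 = time.perf_counter()
print(f"pure Python: {t1 - t0:.3f}s  Numba: {t3 - t2:.3f}s")
```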
What's so surprising about that?! Among mainstream programming systems it's hard to find one slower than CPython. Pitting a compiled language against a scripting language? Give it a rest. And of course it's a specific benchmark in a specific environment. PyPy runs the Fibonacci sequence vastly faster than CPython too, but how many production systems actually use it? So much for "fair"; it's bait for the Apple fans.
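For what it's worth, the PyPy point is easy to try yourself; a minimal sketch (run the same file under both interpreters, e.g. `python3 fib.py` and `pypy3 fib.py`, and compare the times):

```python
# fib.py: naive recursive Fibonacci, timed under whichever interpreter runs it.
import sys
import time

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

start = time.perf_counter()
result = fib(32)
elapsed = time.perf_counter() - start
print(sys.implementation.name, result, f"{elapsed:.3f}s")
```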
This time I genuinely have to roll my eyes. Swift compiles to machine code, while Python (CPython, presumably) compiles to bytecode that is then executed by the Python virtual machine.
So in the vast majority of scenarios there will certainly be a huge difference in execution efficiency. But I suspect the 210x figure is a conclusion drawn from a particular scenario picked to make Swift look fast.
In fact Swift and Python aren't all that comparable; a comparison against Golang would be the honest matchup. I'd recommend getting to know LLVM: Swift is first compiled to an intermediate representation, which is then compiled further into pure machine code. In essence that is no different from compiled C or Objective-C; it all ends up as machine code, so efficiency is nothing to worry about.
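You can even watch both stages from Python, using Numba (mentioned above), which is itself built on LLVM; this is only an illustrative sketch of the IR-then-machine-code pipeline, not how Swift's toolchain is driven:

```python
# Inspect the LLVM IR and the final native assembly Numba produces for a function.
from numba import njit

@njit
def add(a, b):
    return a + b

add(1, 2)  # force compilation for the (int64, int64) signature

# inspect_llvm() returns the intermediate representation, inspect_asm() the
# machine code (as assembly text), one entry per compiled signature.
for sig, ir in add.inspect_llvm().items():
    print("LLVM IR for", sig, "->", len(ir), "characters")
for sig, asm in add.inspect_asm().items():
    print("native asm for", sig, "->", len(asm), "characters")
```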
And when you need results immediately, the way an interpreted scripting language gives them, you can swap the second compilation step (emitting machine code) for handing the intermediate representation straight to an interpreter. LLVM really is a revolutionary piece of work.

The English in the picture means "RC4 encryption", which is a cryptographic pseudo-random number generation algorithm; its main performance cost really is arithmetic.
Which shows just how slow Python's arithmetic is.
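To see why, here is a minimal pure-Python RC4 sketch (for illustration only; RC4 is long broken for real cryptography): the hot loop is nothing but small-integer additions, modular reductions, table indexing and XOR, exactly the kind of work CPython is worst at.

```python
# Minimal RC4 implementation: key scheduling (KSA) plus the keystream loop (PRGA).
def rc4(key: bytes, data: bytes) -> bytes:
    # KSA: permute the 256-entry state table using the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # PRGA: every output byte costs a few additions, two mod-256 reductions,
    # a swap and an XOR; in CPython each of these is a boxed-integer operation.
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

print(rc4(b"Key", b"Plaintext").hex())  # classic test vector: bbf316e8d940af0ad3
```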