In Python/NumPy, if I build a NumPy array from 10^6 randomly generated values, construction takes about 0.1 s, yet computing the mean of that array takes only about 2% of the construction time. Meanwhile, my own hand-written mean over a plain Python list takes more than 10 seconds.
So NumPy must be doing something very clever here. Which is it:
1) Is my own implementation simply that inefficient?
2) Does NumPy precompute some intermediate result during initialization? (In that case the mean should be slower than a single element access but faster than O(n).)
If it's 2), can you explain the idea (i.e., how the mean could be obtained without an O(n) pass in Python)?
Any help would be greatly appreciated!
Reply content:
Many of NumPy's functions are not only implemented in C but also backed by BLAS (Windows builds typically link against MKL, Linux builds against OpenBLAS). Those BLAS implementations are heavily optimized operation by operation, for example using the AVX vector instruction set, and can be faster than a straightforward C implementation you would write yourself, never mind a pure-Python one. NumPy's bottom layer uses BLAS for vector and matrix operations, and an operation like the mean of a vector is easy to accelerate with multi-threading or vectorization; MKL in particular applies many such optimizations.
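To make the claim concrete, here is a minimal sketch (assuming NumPy is installed) that times a BLAS-backed operation, `np.dot`, against the same dot product written as a pure-Python loop. The exact speedup depends on your machine and which BLAS NumPy was linked against, but the gap is typically one to two orders of magnitude:

```python
import time
import random

import numpy as np  # assumed available

N = 1_000_000
xs = [random.random() for _ in range(N)]
ys = [random.random() for _ in range(N)]

# Pure Python: one interpreted multiply-add per element.
t0 = time.perf_counter()
dot_py = sum(x * y for x, y in zip(xs, ys))
t_py = time.perf_counter() - t0

# NumPy dispatches np.dot to the linked BLAS (e.g. MKL or OpenBLAS),
# which uses SIMD instructions such as AVX on contiguous memory.
ax, ay = np.array(xs), np.array(ys)
t0 = time.perf_counter()
dot_np = float(np.dot(ax, ay))
t_np = time.perf_counter() - t0

print(f"python: {t_py:.4f}s  numpy: {t_np:.4f}s  speedup: {t_py / t_np:.0f}x")
```

Both versions compute the same number (up to floating-point rounding); only the amount of per-element interpreter overhead differs.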
# The asker's pure-Python benchmark, reconstructed. The original was
# Python 2 (time.clock, print statement); time.clock() was removed in
# Python 3.8, so perf_counter() is used here instead.
from time import perf_counter
from random import random

a = []
s = 0.0
N = 1000000

st = perf_counter()
for i in range(N):
    a.append(random())
for x in a:
    s = s + x
et = perf_counter()

print("mean =", s / N, "time =", et - st, "seconds")
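For comparison, a sketch of the same benchmark done the NumPy way (assuming NumPy is installed): both the array construction and the mean happen in compiled code, with no Python-level loop at all.

```python
import time

import numpy as np  # assumed available

N = 1_000_000

t0 = time.perf_counter()
a = np.random.random(N)   # array built entirely in C
t_build = time.perf_counter() - t0

t0 = time.perf_counter()
m = float(a.mean())       # single vectorized pass over contiguous memory
t_mean = time.perf_counter() - t0

print(f"mean={m:.6f}  build={t_build:.4f}s  mean-time={t_mean:.4f}s")
```

This mirrors the asker's observation: the mean is still an O(n) pass, it is just an O(n) pass in optimized native code rather than in the bytecode interpreter, which is why it finishes in a small fraction of the construction time.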