Python threads cannot achieve true multi-core parallelism because of the Global Interpreter Lock (GIL), but some packages work around this by running work in multiple processes, such as the third-party pp (Parallel Python) package and the standard-library multiprocessing module. So which one makes the best use of multiple cores?
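The GIL limitation is easy to demonstrate. In the minimal sketch below (my own illustration, not from the original benchmark), several threads each run a CPU-bound loop: the results come out correct, but because only one thread can execute Python bytecode at a time, total CPU utilization stays near a single core on a multi-core machine.

```python
import threading

def count_up(n, out, idx):
    # Pure CPU-bound work: sum the integers from 1 to n.
    total = 0
    while n > 0:
        total += n
        n -= 1
    out[idx] = total

results = [None] * 4
threads = [threading.Thread(target=count_up, args=(100000, results, i))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All four threads produce the same correct answer, yet the work
# was effectively serialized by the GIL.
print(results)
```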
Here I make a simple comparison. Conclusion first: multiprocessing is the best.
In my tests on a 4-core/8-thread CPU, multiprocessing took full advantage of the multi-core hardware: it kept the load on all 8 logical cores roughly equal and pushed overall CPU utilization to the maximum. The pp package, by contrast, only used the 4 physical cores and essentially ignored the remaining 4 hyper-threaded (logical) cores.
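The logical-core count referred to above can be checked from Python itself; a small sketch (note that `os.cpu_count()` reports logical cores, i.e. hardware threads, not physical cores):

```python
import os
import multiprocessing

# Number of logical cores (hardware threads) the OS exposes.
# On a 4-core/8-thread CPU like the one tested above, this is 8.
logical = os.cpu_count()
print("logical cores:", logical)

# multiprocessing reports the same count, which is also the default
# worker count when Pool() is created without a processes argument.
assert multiprocessing.cpu_count() == logical
```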
The following sample code is provided for reference:
import multiprocessing
import time
import pp

def func(n):
    sum = 0
    for i in xrange(n):
        sum += i
    return sum

if __name__ == "__main__":
    multiprocessing.freeze_support()

    # Serial baseline.
    start = time.clock()
    for i in xrange(200000):
        sum = func(10000)
    print ">> normal:", time.clock() - start

    # multiprocessing pool with 8 worker processes.
    start = time.clock()
    pool = multiprocessing.Pool(processes=8)
    jobs = []
    for i in xrange(200000):
        jobs.append(pool.apply_async(func, (10000,)))
    pool.close()
    pool.join()
    print ">> multiprocessing:", time.clock() - start

    # pp job server with 8 workers.
    start = time.clock()
    jobs = []
    job_server = pp.Server()
    job_server.set_ncpus(8)
    for i in xrange(200000):
        jobs.append(job_server.submit(func, (10000,)))
    job_server.wait()
    print ">> pp:", time.clock() - start
    job_server.print_stats()
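The benchmark above is Python 2 code (`xrange`, print statements, `time.clock`, and pp, which was never fully ported to Python 3). For readers on Python 3, here is a rough equivalent of the serial and multiprocessing portions, a sketch only: `time.clock` was removed in Python 3.8, so `time.perf_counter` is used instead, and the task counts are parameters you would tune for your own machine.

```python
import multiprocessing
import time

def cpu_sum(n):
    # CPU-bound task: sum of the integers below n.
    total = 0
    for i in range(n):
        total += i
    return total

def benchmark(tasks=2000, n=10000, workers=8):
    # Serial baseline.
    start = time.perf_counter()
    serial = [cpu_sum(n) for _ in range(tasks)]
    serial_t = time.perf_counter() - start

    # Parallel version: a pool of worker processes, each with
    # its own interpreter, so the GIL is not a bottleneck.
    start = time.perf_counter()
    with multiprocessing.Pool(processes=workers) as pool:
        parallel = pool.map(cpu_sum, [n] * tasks)
    parallel_t = time.perf_counter() - start

    assert serial == parallel  # same answers either way
    return serial_t, parallel_t

if __name__ == "__main__":
    s, p = benchmark()
    print("serial:   %.2fs" % s)
    print("parallel: %.2fs" % p)
```

With enough tasks per process, the parallel version should approach a speedup proportional to the number of usable cores, which is exactly the property the comparison above measures.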
Comparison of common Python multiprocessing packages