Test whether a single-process Python program can use a multi-core CPU

Source: Internet
Author: User
Tags: python

When programming in C++, multithreading can indeed push a single process's CPU usage above 100%. For Python, however, this seems to conflict with what many online articles claim.

So I decided to test it myself and wrote the following code:

from thread import start_new_thread

def worker():
    while 1:
        # print 1
        pass

for it in range(0, 15):
    start_new_thread(worker, ())


raw_input()

The test environment is CentOS 6.4 64-bit with Python 2.7.

The result is as follows:

(top screenshot omitted)

We can clearly see that the CPU usage of the Python process with PID 31199 reaches 787.9%, which is close to the theoretical maximum of 800% on this 8-core machine.

All eight CPU cores also reached nearly 100% utilization.

Based on the above test results, we can conclude that a single Python process using multiple threads can indeed keep a multi-core CPU busy, contrary to what is commonly claimed online.
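
For readers on Python 3, here is a rough equivalent of the busy-loop test above. This is my own sketch, not part of the original article: the thread module became _thread, raw_input became input, and I added a line that prints the PID so you know which process to watch in top.

import os
import threading

def worker():
    # Spin forever so each thread keeps one core busy.
    while True:
        pass

print(os.getpid())  # PID to look for in top

for _ in range(15):
    # Daemon threads let the process exit once input() returns.
    threading.Thread(target=worker, daemon=True).start()

input()  # keep the main thread alive while CPU usage is observed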

However, I still hope that readers who have researched this further will criticize and correct this article. Thank you ~

Thanks to the discussion by several bloggers such as la.onger, I have added a test comparing the total time needed to complete a purely CPU-bound computation with one thread versus multiple threads. The code is as follows:

import time
from threading import Thread

LOOPS = 1000000
THREAD_NUM = 10
STEP_SIZE = 94753434


class Test(object):
    num = 1

    def work(self):
        for it in xrange(0, LOOPS):
            if self.num > STEP_SIZE:
                self.num -= STEP_SIZE
            else:
                self.num += STEP_SIZE

    def one_thread_test(self):
        self.num = 1

        begin_time = time.time()

        for v in xrange(0, THREAD_NUM):
            self.work()

        print 'time passed: ', time.time() - begin_time

    def multi_thread_test(self):
        self.num = 1

        t_list = []

        begin_time = time.time()

        for v in xrange(0, THREAD_NUM):
            t = Thread(target=self.work)
            t.start()
            t_list.append(t)

        for it in t_list:
            it.join()

        print 'time passed: ', time.time() - begin_time


t = Test()
t.one_thread_test()
t.multi_thread_test()

The output is as follows:

time passed: 3.44264101982
time passed: 7.22910785675

Multithreading is actually slower than a single thread, presumably because CPython's Global Interpreter Lock (GIL) lets only one thread execute bytecode at a time, while the thread switching adds overhead.
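
For reference, here is a rough Python 3 port of the same benchmark. It is my own sketch, not the original code: xrange becomes range and print becomes a function. The GIL still prevents the threads from running this bytecode in parallel, so multithreading should not come out faster here either.

import time
from threading import Thread

LOOPS = 1000000
THREAD_NUM = 10
STEP_SIZE = 94753434


class Test:
    num = 1

    def work(self):
        # Pure CPU-bound work: bounce num around STEP_SIZE for LOOPS iterations.
        for _ in range(LOOPS):
            if self.num > STEP_SIZE:
                self.num -= STEP_SIZE
            else:
                self.num += STEP_SIZE

    def one_thread_test(self):
        self.num = 1
        begin_time = time.time()
        for _ in range(THREAD_NUM):
            self.work()
        print('time passed:', time.time() - begin_time)

    def multi_thread_test(self):
        self.num = 1
        begin_time = time.time()
        threads = [Thread(target=self.work) for _ in range(THREAD_NUM)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print('time passed:', time.time() - begin_time)


if __name__ == '__main__':
    t = Test()
    t.one_thread_test()
    t.multi_thread_test()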

To compare with C++, I also wrote the C++ version of the code as follows:

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <iostream>
#include <memory>
#include <sstream>
#include <algorithm>
#include <string>
#include <vector>
#include <set>
#include <map>
#include <sys/time.h>
#include <pthread.h>
using namespace std;

#define LOOPS 1000000
#define THREAD_NUM 10
#define STEP_SIZE 94753434

class Test
{
public:
    Test() {}
    virtual ~Test() {}

    void one_thread_test() {
        this->num = 1;

        gettimeofday(&m_tpstart, NULL);
        for (size_t i = 0; i < THREAD_NUM; ++i)
        {
            work();
        }

        gettimeofday(&m_tpend, NULL);

        long timeuse = 1000000 * (long)(m_tpend.tv_sec - m_tpstart.tv_sec) + m_tpend.tv_usec - m_tpstart.tv_usec; // microseconds

        printf("time passed: %f\n", ((double)timeuse) / 1000000);
    }

    void multi_thread_test() {
        this->num = 1;
        int ret;

        vector<pthread_t> vecThreadId; // ids of all threads

        pthread_attr_t attr;
        pthread_attr_init(&attr);
        pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);

        gettimeofday(&m_tpstart, NULL);

        pthread_t threadId;
        for (int i = 0; i < THREAD_NUM; i++)
        {
            ret = pthread_create(&threadId, &attr, Test::static_run_work, (void *)this);
            if (ret != 0) {
                pthread_attr_destroy(&attr);
            }
            vecThreadId.push_back(threadId);
        }
        pthread_attr_destroy(&attr);
        for (vector<pthread_t>::iterator it = vecThreadId.begin(); it != vecThreadId.end(); ++it)
        {
            pthread_join(*it, NULL);
        }

        gettimeofday(&m_tpend, NULL);

        long timeuse = 1000000 * (long)(m_tpend.tv_sec - m_tpstart.tv_sec) + m_tpend.tv_usec - m_tpstart.tv_usec; // microseconds

        printf("time passed: %f\n", ((double)timeuse) / 1000000);
    }

    void work() {
        for (size_t i = 0; i < LOOPS; ++i) {
            if (this->num > STEP_SIZE) {
                this->num -= STEP_SIZE;
            }
            else {
                this->num += STEP_SIZE;
            }
        }
    }

    static void *static_run_work(void *args) {
        Test *t = (Test *)args;
        t->work();

        return NULL;
    }

public:
    int64_t num;
    struct timeval m_tpstart, m_tpend;
};

int main(int argc, char **argv)
{
    Test test;

    test.one_thread_test();
    test.multi_thread_test();
    return 0;
}

The output is as follows:

time passed: 0.036114
time passed: 0.000513

It can be seen that the C++ version performs far better.

It can also be seen that, for CPU-bound work, multithreaded Python programs really do make poor use of a multi-core CPU.
