Python RPC frameworks: callme and multiprocessing.managers


1. Choosing a Python RPC framework

QAM http://packages.python.org/qam/introduction.html

    Based on the carrot messaging framework (AMQP protocol)

http://ask.github.com/carrot/introduction.html

QAM is no longer actively maintained; its replacement is callme. carrot has likewise been superseded by kombu.

callme http://pypi.python.org/pypi/callme

easy_install callme  # bad luck today: the index kept returning 502 gateway errors, so I often had to download and install manually

PS: With callme, running the RPC server in threaded mode gives better concurrency, but then you have to deal with concurrent RPC calls accessing the same resource.

[xudongsong@vh212 tmp]$ cat callme_server.py
import callme, time

g_count = 0
def count():
    global g_count
    g_count += 1
    return g_count

server = callme.Server(server_id='fooserver_1', amqp_host='localhost', threaded=True)
server.register_function(count, 'count')
server.start()

[xudongsong@vh212 tmp]$ cat callme_client.py
import callme
import logging
import threading

logging.basicConfig(level=logging.INFO, format="%(threadName)s %(asctime)s %(levelname)s [%(filename)s:%(lineno)d]%(message)s")

def thread_body():
    proxy = callme.Proxy(amqp_host='localhost')
    while True:
        logging.info(proxy.use_server('fooserver_1').count())

if __name__ == '__main__':
    threadList = list()
    for i in range(10):
        th = threading.Thread(target=thread_body)
        th.daemon = True
        th.start()
        threadList.append(th)
    for th in threadList:
        th.join()

In testing, when the RPC server is started with threaded = True, the RPC clients receive some duplicated values; with threaded = False the problem does not occur. But a single-threaded server has poor concurrency, so the server can be improved as follows:

[xudongsong@vh212 tmp]$ cat callme_server.py
import callme, time, threading

lock = threading.Lock()
g_count = 0

def count():
    global g_count
    with lock:            # make the read-modify-write atomic
        g_count += 1
        result = g_count  # capture under the lock so another thread's increment cannot slip in before the return
    return result

server = callme.Server(server_id='fooserver_1', amqp_host='localhost', threaded=True)
server.register_function(count, 'count')
server.start()
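The race this lock guards against is easy to reproduce with plain threads, no broker required. A minimal sketch (Python 3; the names and thread counts are mine, nothing here comes from callme) contrasting locked and unlocked increments of a shared counter:

```python
import threading

N_THREADS = 8
N_INCREMENTS = 10_000

def run(use_lock):
    """Increment a shared counter from several threads; return the final value."""
    counter = 0
    lock = threading.Lock()

    def worker():
        nonlocal counter
        for _ in range(N_INCREMENTS):
            if use_lock:
                with lock:          # serializes the read-modify-write
                    counter += 1
            else:
                counter += 1        # unsynchronized: interleaved updates can be lost

    threads = [threading.Thread(target=worker) for _ in range(N_THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(run(use_lock=True))   # always 80000
print(run(use_lock=False))  # may fall short of 80000 when increments interleave
```

In the threaded callme server, count() is the same pattern: each concurrent RPC call runs in its own thread and increments g_count.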

2. RabbitMQ

All of this depends on having an AMQP broker; the one I chose is RabbitMQ.

RabbitMQ    official site: http://www.rabbitmq.com/

            wiki: http://en.wikipedia.org/wiki/RabbitMQ

            (Erlang really is powerful: "The RabbitMQ server is written in Erlang and is built on the Open Telecom Platform framework for clustering and failover")

RabbitMQ installation steps:

    yum install rabbitmq-server (yum list rabbitmq-server shows the installed version is 2.6.1)

    sudo /etc/init.d/rabbitmq-server start

    Or start it this way: rabbitmq-server -detached

    rabbitmqctl --help

    rabbitmqctl status

    No plugin management tool rabbitmq-plugins? See http://stackoverflow.com/questions/8548983/how-to-install-rabbitmq-management-plugin-rabbitmq-plugins

That version is too old: every plugin has to be downloaded from the official site by hand, which is far too inconvenient. Better to switch to the latest release published on the official site.

    http://www.rabbitmq.com/install-generic-unix.html (just download and unpack)

    Overall settings: cat /home/dongsong/rabbitmq_server-3.0.0/sbin/rabbitmq-defaults

        The environment configuration lives in the file named by CONF_ENV_FILE (http://www.rabbitmq.com/configure.html#customise-general-unix-environment)

        The component configuration lives in the file named by CONFIG_FILE (http://www.rabbitmq.com/configure.html#configuration-file)
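For orientation, here is what a minimal CONF_ENV_FILE file might look like. This is an illustrative sketch, not taken from the original post; the variable names are RabbitMQ's documented RABBITMQ_* environment settings with the prefix dropped, which is how rabbitmq-env.conf expects them:

```shell
# rabbitmq-env.conf: the file named by CONF_ENV_FILE.
# Each VAR here is read by the server scripts as the environment variable RABBITMQ_VAR.
NODENAME=rabbit@localhost    # Erlang node name (default is rabbit@<hostname>)
NODE_PORT=5672               # AMQP listener port (5672 is the default)
```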

    Start: [dongsong@localhost sbin]$ /home/dongsong/rabbitmq_server-3.0.0/sbin/rabbitmq-server -detached

                  ./rabbitmq-server: line 85: erl: command not found
                  Install Erlang: download from http://www.erlang.org/download.html, unpack, then ./configure --prefix=xx; make; make install

    Stop: /home/dongsong/rabbitmq_server-3.0.0/sbin/rabbitmqctl stop

    Status: /home/dongsong/rabbitmq_server-3.0.0/sbin/rabbitmqctl status

    Enable the management plugin: /home/dongsong/rabbitmq_server-3.0.0/sbin/rabbitmq-plugins enable rabbitmq_management

        Then browse to http://server-name:15672/

3. Yesterday (2013-01-29), while reading the multiprocessing module, I found that its managers are even more convenient for building an RPC framework

[dongsong@localhost python_study]$ cat process_model_v2.py
#encoding=utf-8
import multiprocessing, time, Queue, sys, random
from multiprocessing import Process
from multiprocessing.managers import BaseManager

HOST = '127.0.0.1'
PORT = 50000
AUTH_KEY = 'a secret'

class QueueManager(BaseManager):
    pass

class QueueProc(Process):
    def __init__(self):
        self.queueObj = Queue.Queue()
        super(QueueProc, self).__init__()
    def run(self):
        QueueManager.register('get_queue', callable=lambda: self.queueObj)
        manager = QueueManager(address=(HOST, PORT), authkey=AUTH_KEY)
        server = manager.get_server()
        print '%s(%s) started....' % (self.name, self.pid)
        server.serve_forever()
        print '%s(%s) exit' % (self.name, self.pid)

class Worker(Process):
    def run(self):
        QueueManager.register('get_queue')
        manager = QueueManager(address=(HOST, PORT), authkey=AUTH_KEY)
        manager.connect()
        self.queueObj = manager.get_queue()
        while True:
            task = self.queueObj.get()
            print '%s(%s) get task "%s", %s left in queue' % (self.name, self.pid, task, self.queueObj.qsize())
            time.sleep(random.randrange(5))

class Scheduler(Process):
    def run(self):
        QueueManager.register('get_queue')
        manager = QueueManager(address=(HOST, PORT), authkey=AUTH_KEY)
        manager.connect()
        self.queueObj = manager.get_queue()
        taskId = 0
        while True:
            task = 'task-%d' % taskId
            taskId += 1
            self.queueObj.put(task)
            print '%s(%s) put task "%s", %s left in queue' % (self.name, self.pid, task, self.queueObj.qsize())
            time.sleep(random.randrange(5))

if __name__ == '__main__':
    queueProc = QueueProc()
    print 'queueProc daemon = %s; is_alive() = %s' % (queueProc.daemon, queueProc.is_alive())
    queueProc.daemon = True  # terminated when the parent exits; a daemonized queueProc may not create child processes
    queueProc.start()
    print 'queueProc daemon = %s; is_alive() = %s' % (queueProc.daemon, queueProc.is_alive())
    while not queueProc.is_alive():
        print 'queueProc.is_alive() = %s' % queueProc.is_alive()
    scheduler = Scheduler()
    scheduler.daemon = True
    scheduler.start()
    workerList = [Worker() for i in range(1)]
    for worker in workerList:
        worker.daemon = True
        worker.start()
    currentProc = multiprocessing.current_process()
    print '%s(%s) is the master...' % (currentProc.name, currentProc.pid)
    queueProc.join()
    scheduler.join()
    for worker in workerList:
        worker.join()

[dongsong@localhost python_study]$ vpython process_model_v2.py
queueProc daemon = False; is_alive() = False
queueProc daemon = True; is_alive() = True
MainProcess(3341) is the master...
QueueProc-1(3342) started....
Scheduler-2(3343) put task "task-0", 1 left in queue
Worker-3(3344) get task "task-0", 1 left in queue
Scheduler-2(3343) put task "task-1", 1 left in queue
Scheduler-2(3343) put task "task-2", 2 left in queue
Worker-3(3344) get task "task-1", 2 left in queue
Scheduler-2(3343) put task "task-3", 2 left in queue
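The same register/connect pattern can be exercised end to end in a single file. A minimal sketch in Python 3 (queue instead of Queue, authkey must be bytes); note that reading the bound port off the object returned by get_server() relies on CPython's Server.address attribute, an implementation detail rather than a documented API:

```python
import queue
import threading
from multiprocessing.managers import BaseManager

task_queue = queue.Queue()

class QueueManager(BaseManager):
    pass

# Same pattern as process_model_v2.py: expose the queue under a registered name.
QueueManager.register('get_queue', callable=lambda: task_queue)

# Port 0 lets the OS pick a free port; the bound address is on the server object.
mgr = QueueManager(address=('127.0.0.1', 0), authkey=b'a secret')
server = mgr.get_server()
host, port = server.address

# serve_forever() blocks, so run it in a daemon thread for this demo.
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client needs the same registered name and the same authkey.
client = QueueManager(address=(host, port), authkey=b'a secret')
client.connect()
remote_queue = client.get_queue()   # a proxy; every method call goes over the socket

remote_queue.put('task-0')
print(remote_queue.get())    # prints task-0
print(remote_queue.qsize())  # prints 0
```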

Monitoring the memory consumed by a process

[dongsong@localhost python_study]$ cat monitor_memory.py
#encoding=utf-8
import multiprocessing, time
import os

_proc_status = '/proc/%d/status' % os.getpid()
_scale = {'kB': 1024.0, 'mB': 1024.0*1024.0,
          'KB': 1024.0, 'MB': 1024.0*1024.0}

def _VmB(VmKey):
    '''Private.
    '''
    global _proc_status, _scale
    # get pseudo file  /proc/<pid>/status
    try:
        t = open(_proc_status)
        v = t.read()
        t.close()
    except:
        return 0.0  # non-Linux?
    # get VmKey line e.g. 'VmRSS:  9999  kB\n ...'
    i = v.index(VmKey)
    v = v[i:].split(None, 3)  # whitespace
    if len(v) < 3:
        return 0.0  # invalid format?
    # convert Vm value to bytes
    return float(v[1]) * _scale[v[2]]

def memory(since=0.0):
    '''Return memory usage in bytes.
    '''
    return _VmB('VmSize:') - since

def resident(since=0.0):
    '''Return resident memory usage in bytes.
    '''
    return _VmB('VmRSS:') - since

def stacksize(since=0.0):
    '''Return stack size in bytes.
    '''
    return _VmB('VmStk:') - since

def FetchMemSize(pid=None):
    if pid == None:
        proc = multiprocessing.current_process()
        pid = proc.pid
    print 'current process pid is %s' % pid
    statusInfos = file('/proc/%s/status' % pid, 'r').read()
    indexNum = statusInfos.index('VmRSS:')
    print '\t'.join(statusInfos[indexNum:].split(None, 3)[0:3])

if __name__ == '__main__':
    d = dict()
    dIndex = 0
    while True:
        d[dIndex] = 'hello'*10000
        FetchMemSize()
        print resident()
        time.sleep(3)
        dIndex += 1

[dongsong@localhost python_study]$ vpython monitor_memory.py
current process pid is 3667
VmRSS:  4800    kB
4919296.0
current process pid is 3667
VmRSS:  4876    kB
4993024.0
current process pid is 3667
VmRSS:  4924    kB
5042176.0
current process pid is 3667
VmRSS:  4972    kB
5091328.0
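The script above is Python 2. The same /proc parsing idea condenses to one helper in Python 3; a sketch (vm_value is my name, not from the original script; Linux-only, returning 0.0 elsewhere like the original):

```python
import os

# The kernel reports sizes with a unit suffix; map the strings seen in /proc/<pid>/status.
_SCALE = {'kB': 1024.0, 'KB': 1024.0, 'mB': 1024.0 * 1024.0, 'MB': 1024.0 * 1024.0}

def vm_value(key, pid=None):
    """Return one Vm* field (e.g. 'VmRSS:', 'VmSize:', 'VmStk:') in bytes."""
    path = '/proc/%s/status' % (pid if pid is not None else os.getpid())
    try:
        with open(path) as f:
            text = f.read()
    except OSError:
        return 0.0                      # non-Linux, or no such pid
    i = text.find(key)
    if i == -1:
        return 0.0                      # field not present
    _label, value, unit = text[i:].split(None, 3)[:3]
    return float(value) * _SCALE[unit]

if __name__ == '__main__':
    print(vm_value('VmRSS:'))   # resident set size of this process, in bytes
```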
