RuntimeError: Second simultaneous read on fileno 8 detected, raised when python-memcached get/set calls run concurrently under eventlet.

The following error occurs: RuntimeError: Second simultaneous read on fileno 8 detected. Unless you really know what you're doing, make sure that only one greenthread can read any particular socket. Consider using a pools.Pool. If you do know what you're doing and want to disable this error, call eventlet.debug.hub_multiple_reader_prevention(False)

2013-03-08 18:27:27 17870 TRACE glance.store.chunkcache.fake   File "/usr/lib/python2.7/dist-packages/memcache.py", line 862, in get
2013-03-08 18:27:27 17870 TRACE glance.store.chunkcache.fake     return self._get('get', key)
2013-03-08 18:27:27 17870 TRACE glance.store.chunkcache.fake   File "/usr/lib/python2.7/dist-packages/memcache.py", line 846, in _get
2013-03-08 18:27:27 17870 TRACE glance.store.chunkcache.fake     return _unsafe_get()
2013-03-08 18:27:27 17870 TRACE glance.store.chunkcache.fake   File "/usr/lib/python2.7/dist-packages/memcache.py", line 830, in _unsafe_get
2013-03-08 18:27:27 17870 TRACE glance.store.chunkcache.fake     rkey, flags, rlen, = self._expectvalue(server)
2013-03-08 18:27:27 17870 TRACE glance.store.chunkcache.fake   File "/usr/lib/python2.7/dist-packages/memcache.py", line 955, in _expectvalue
2013-03-08 18:27:27 17870 TRACE glance.store.chunkcache.fake     line = server.readline()
2013-03-08 18:27:27 17870 TRACE glance.store.chunkcache.fake   File "/usr/lib/python2.7/dist-packages/memcache.py", line 1125, in readline
2013-03-08 18:27:27 17870 TRACE glance.store.chunkcache.fake     data = recv(4096)
2013-03-08 18:27:27 17870 TRACE glance.store.chunkcache.fake   File "/usr/local/lib/python2.7/dist-packages/eventlet/greenio.py", line 249, in recv
2013-03-08 18:27:27 17870 TRACE glance.store.chunkcache.fake     timeout_exc=socket.timeout("timed out"))
2013-03-08 18:27:27 17870 TRACE glance.store.chunkcache.fake   File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/__init__.py", line 117, in trampoline
2013-03-08 18:27:27 17870 TRACE glance.store.chunkcache.fake     listener = hub.add(hub.READ, fileno, current.switch)
2013-03-08 18:27:27 17870 TRACE glance.store.chunkcache.fake   File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/poll.py", line 27, in add
2013-03-08 18:27:27 17870 TRACE glance.store.chunkcache.fake     listener = super(Hub, self).add(evtype, fileno, cb)
2013-03-08 18:27:27 17870 TRACE glance.store.chunkcache.fake   File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 126, in add
2013-03-08 18:27:27 17870 TRACE glance.store.chunkcache.fake     evtype, fileno, evtype))
2013-03-08 18:27:27 17870 TRACE glance.store.chunkcache.fake RuntimeError: Second simultaneous read on fileno 8 detected.  Unless you really know what you're doing, make sure that only one greenthread can read any particular socket.  Consider using a pools.Pool. If you do know what you're doing and want to disable this error, call eventlet.debug.hub_multiple_reader_prevention(False)
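For context, eventlet raises this error whenever two greenthreads end up waiting to read from the same socket. A minimal reproduction sketch (hypothetical: the server address, key names, and loop counts are mine, not from the original report) looks roughly like this:

# Hypothetical reproduction: two greenthreads share one python-memcached
# client, so both can end up blocked in recv() on the same socket fileno,
# which eventlet's hub refuses to allow.
import eventlet
eventlet.monkey_patch()  # make memcache's sockets cooperative (green)

import memcache

client = memcache.Client(['127.0.0.1:11211'])  # one client, one socket

def worker(n):
    for i in range(1000):
        key = 'key-%d-%d' % (n, i)
        client.set(key, 'value')   # both greenthreads write to the socket
        client.get(key)            # ...and both wait to read replies from it

pool = eventlet.GreenPool()
pool.spawn(worker, 1)
pool.spawn(worker, 2)
pool.waitall()  # may raise: Second simultaneous read on fileno N detected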

In addition to this, many of the memcache.set() calls return 0.

The code in which the problem occurs looks roughly like this:

class CacheBackend(CacheBackendBase):
    '''
    Cache in memcached servers.
    '''
    def __init__(self):
        value_length = CONF.chunkcache_memcached_value_length
        servers = CONF.chunkcache_memcached_servers
        self.memcache = memcache.Client(servers, debug=1,
                                        server_max_value_length=value_length)

    @staticmethod
    def instance():
        global CACHE_BACKEND
        if CACHE_BACKEND is None:
            CACHE_BACKEND = CacheBackend()
        return CACHE_BACKEND

    def get(self, key, default=None):
        result = self.memcache.get(str(key))
        return result or default

    def exist(self, key):
        return self.get(str(key)) != None

    def set(self, key, value):
        result = self.memcache.set(str(key), value)
        if result == 0:
            raise exception.MemcacheSetError()
        return result

    def delete(self, key):
        if self.exist(key):
            result = self.memcache.delete(str(key))
            if result == 0:
                raise exception.MemcacheDeleteError()
        return True

    def clear(self):
        # No need for clearing memcached.
        return True
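The snippet relies on several module-level names defined elsewhere in the project. Rough, hypothetical stand-ins for them (the real code presumably reads CONF from the project's own configuration system) would be:

# Hypothetical stand-ins for names the class above depends on; the real
# project defines these elsewhere (configuration, base class, exceptions).
import memcache

class CacheBackendBase(object):
    '''Placeholder for the project's cache-backend base class.'''

class _FakeConf(object):
    chunkcache_memcached_servers = ['127.0.0.1:11211']
    chunkcache_memcached_value_length = 1024 * 1024

CONF = _FakeConf()

class exception(object):
    class MemcacheSetError(Exception):
        pass
    class MemcacheDeleteError(Exception):
        pass

CACHE_BACKEND = None   # module-level slot filled lazily by instance()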

After discussing it with colleagues, the suspicion is that a socket connection was created before a fork(), or that multiple processes are otherwise reusing the same socket, but that explanation doesn't fully add up. I don't understand it well yet, so I'm recording the workaround first; if anyone can explain it properly, please do.
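To illustrate the fork hypothesis (a hypothetical sketch, not taken from the original service): if the client, and with it the TCP connection, is created before os.fork(), the parent and child both inherit the same connected file descriptor, so their requests and replies share one byte stream.

# Hypothetical illustration of the "socket shared across fork" hazard.
import os
import memcache

client = memcache.Client(['127.0.0.1:11211'])
client.set('warmup', 1)   # the TCP connection is established here

pid = os.fork()
# Parent and child now hold duplicates of the same connected socket fd;
# if both talk to memcached through it, their traffic can interleave and
# one process may read the reply meant for the other.
print('pid %d read %r' % (os.getpid(), client.get('warmup')))
if pid:
    os.waitpid(pid, 0)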

The workaround is shown in the code below. The general change is to create a new memcache client for each set, get, and delete call.

Does that actually create a new socket each time? The memcache module seems to keep its own socket cache, so it may hand back the same cached socket.

In the version below, it makes no difference whether disconnect_all() is called or not; the problem does not occur either way. Adding disconnect_all() to the original version above, on the other hand, does not fix it.

It is hard to explain, but the problem is solved: the errors above have not reappeared in multiple tests.

class CacheBackend(CacheBackendBase):
    '''
    Cache in memcached servers.
    '''
    def __init__(self):
        self.value_length = CONF.chunkcache_memcached_value_length
        self.servers = CONF.chunkcache_memcached_servers

    @staticmethod
    def instance():
        global CACHE_BACKEND
        if CACHE_BACKEND is None:
            CACHE_BACKEND = CacheBackend()
        return CACHE_BACKEND

    def get(self, key, default=None):
        client = _connect(self.servers, self.value_length)
        result = client.get(str(key))
        client.disconnect_all()
        return result or default

    def exist(self, key):
        return self.get(str(key)) != None

    def set(self, key, value):
        client = _connect(self.servers, self.value_length)
        result = client.set(str(key), value)
        client.disconnect_all()
        if result == 0:
            raise exception.MemcacheSetError()
        return result

    def delete(self, key):
        client = _connect(self.servers, self.value_length)
        if self.exist(key):
            result = client.delete(str(key))
            if result == 0:
                raise exception.MemcacheDeleteError()
        client.disconnect_all()
        return True

    def clear(self):
        # No need for clearing memcached.
        return True


def _connect(servers, value_length=chunkstore.Store.CHUNKSIZE):
    return memcache.Client(servers, debug=1,
                           server_max_value_length=value_length)

In addition, I want to point out that this is not caused by the singleton or by keeping a single client object: removing them, the problem is the same. The concurrency involved is multi-process concurrency, so the singleton itself should not be the issue.
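For completeness: if the contention were between greenthreads inside one process, the error message's own suggestion of a pool would apply. A hedged sketch of that approach, with class and parameter names of my own choosing:

# Hypothetical alternative based on the error message's hint: keep a pool
# of clients so each greenthread checks out its own client (and socket)
# for the duration of a call instead of all sharing one.
from eventlet import pools
import memcache

class MemcacheClientPool(pools.Pool):
    def __init__(self, servers, value_length, size=10):
        self.servers = servers
        self.value_length = value_length
        super(MemcacheClientPool, self).__init__(max_size=size)

    def create(self):
        # Called by the pool whenever it needs another client.
        return memcache.Client(self.servers, debug=1,
                               server_max_value_length=self.value_length)

client_pool = MemcacheClientPool(['127.0.0.1:11211'], 1024 * 1024)

def cached_get(key, default=None):
    with client_pool.item() as client:   # exclusive use of one client
        result = client.get(str(key))
    return result or default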
