Implementing a simple multi-threaded task queue in Python
Recently, while using a gradient descent algorithm to plot neural network data, I ran into a performance problem. The gradient descent code looks like this (pseudocode):
def gradient_descent():
    # the gradient descent code
    plotly.write(X, Y)
Each call to plotly.write makes a network request and blocks waiting for the response, which slows down the rest of the gradient descent function.
One solution is to open a new thread every time plotly.write is called, but that approach is crude. I also did not want a full-featured task queue framework like Celery (a distributed task queue): it is far too heavy for this job, and I do not need Redis to persist plotting data.
So how can this be solved? I wrote a very small task queue in Python that calls plotly.write in a separate thread. Here is the code, piece by piece.
from threading import Thread
import Queue
import time

class TaskQueue(Queue.Queue):
First, we inherit from Queue.Queue, so our class gets the get and put methods and the Queue's thread-safe behavior for free.
    def __init__(self, num_workers=1):
        Queue.Queue.__init__(self)
        self.num_workers = num_workers
        self.start_workers()
During initialization we record the desired number of worker threads and start them immediately, so the caller does not have to manage them.
    def add_task(self, task, *args, **kwargs):
        args = args or ()
        kwargs = kwargs or {}
        self.put((task, args, kwargs))
We store each task in the queue as a (task, args, kwargs) tuple. *args carries a variable number of positional arguments, and **kwargs carries named arguments.
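To make the packing concrete, here is a minimal sketch (Python 3 syntax, with a hypothetical greet function not from the article) of how such a (task, args, kwargs) tuple is stored and later unpacked:

```python
# Pack a callable and its arguments into a tuple, as add_task does.
def greet(name, punctuation="!"):
    return "Hello, " + name + punctuation

task_tuple = (greet, ("world",), {"punctuation": "?"})

# Later, a worker unpacks the tuple and invokes the task.
item, args, kwargs = task_tuple
result = item(*args, **kwargs)
print(result)  # Hello, world?
```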
    def start_workers(self):
        for i in range(self.num_workers):
            t = Thread(target=self.worker)
            t.daemon = True
            t.start()
We create a thread for each worker and mark it as a daemon thread, so it runs in the background and does not prevent the program from exiting.
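As a quick illustration of the daemon flag (a Python 3 sketch, not the article's code): a daemon thread does not keep the interpreter alive, so the program can exit even while the worker loop is still blocked on get().

```python
import threading
import queue

q = queue.Queue()
results = []

def worker():
    while True:
        item = q.get()        # blocks forever once the queue is empty
        results.append(item)
        q.task_done()

t = threading.Thread(target=worker)
t.daemon = True               # daemon: does not keep the interpreter alive
t.start()

q.put("job")
q.join()                      # wait until the queued job is processed
print(results)                # ['job']; the program then exits even though
                              # the daemon worker is still blocked on q.get()
```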
The code for the worker function is as follows:
    def worker(self):
        while True:
            item, args, kwargs = self.get()
            item(*args, **kwargs)
            self.task_done()
The worker function takes the task at the head of the queue and runs it with the stored arguments; that is all the class needs.
We can test it through the following code:
def blokkah(*args, **kwargs):
    time.sleep(5)
    print "Blokkah mofo!"

q = TaskQueue(num_workers=5)
for item in range(10):
    q.add_task(blokkah)
q.join()  # wait for all the tasks to finish
print "All done!"
Here blokkah stands in for whatever task we want to run. Note that the queue lives only in memory: if the main program exits, any unfinished tasks are lost. A natural next step would be to run the queue as a separate process with persistent storage, so tasks keep running after the main program exits. Still, this example shows how a work queue for jobs like plotting can grow out of a very small core.
def gradient_descent():
    # the gradient descent code
    queue.add_task(plotly.write, x=X, y=Y)
After this change, my gradient descent loop no longer blocks on plotting and runs noticeably faster. The complete code is below if you are interested.
from threading import Thread
import Queue
import time

class TaskQueue(Queue.Queue):

    def __init__(self, num_workers=1):
        Queue.Queue.__init__(self)
        self.num_workers = num_workers
        self.start_workers()

    def add_task(self, task, *args, **kwargs):
        args = args or ()
        kwargs = kwargs or {}
        self.put((task, args, kwargs))

    def start_workers(self):
        for i in range(self.num_workers):
            t = Thread(target=self.worker)
            t.daemon = True
            t.start()

    def worker(self):
        while True:
            item, args, kwargs = self.get()
            item(*args, **kwargs)
            self.task_done()

def tests():
    def blokkah(*args, **kwargs):
        time.sleep(5)
        print "Blokkah mofo!"

    q = TaskQueue(num_workers=5)
    for item in range(10):
        q.add_task(blokkah)
    q.join()  # block until all tasks are done
    print "All done!"

if __name__ == "__main__":
    tests()
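For readers on Python 3, where the Queue module was renamed to queue and print is a function, a straightforward port of the same class might look like this (a sketch of the same technique, with a small record task added for demonstration; it is not part of the original article):

```python
import queue
import time
from threading import Thread

class TaskQueue(queue.Queue):
    """A minimal task queue that runs callables on daemon worker threads."""

    def __init__(self, num_workers=1):
        super().__init__()
        self.num_workers = num_workers
        self.start_workers()

    def add_task(self, task, *args, **kwargs):
        # Store the callable and its arguments together as one queue item.
        self.put((task, args, kwargs))

    def start_workers(self):
        for _ in range(self.num_workers):
            t = Thread(target=self.worker)
            t.daemon = True   # workers must not block program exit
            t.start()

    def worker(self):
        while True:
            item, args, kwargs = self.get()
            item(*args, **kwargs)
            self.task_done()

results = []

def record(n):
    time.sleep(0.1)   # simulate a slow, blocking call such as plotly.write
    results.append(n)

q = TaskQueue(num_workers=5)
for n in range(10):
    q.add_task(record, n)
q.join()  # block until all tasks are done
print("All done!")
```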