Laravel 5.2 queue driver: the expire parameter causes repeated job execution with the database driver


    'connections' => [
        ....
        'database' => [
            'driver' => 'database',
            'table'  => 'jobs',
            'queue'  => 'default',
            'expire' => 60,
        ],
        'redis' => [
            'driver'     => 'redis',
            'connection' => 'default',
            'queue'      => 'default',
            'expire'     => 180,
        ],
        ....
    ],

In Laravel 5.2's queue configuration file config/queue.php, both the database and redis connections have an expire parameter. The manual describes it as the number of seconds after which a queued job expires, with a default of 60 seconds.

(Note: the configuration file changed in releases after 5.2; the parameter was renamed to retry_after. See the manual.)


Searching the internet turns up little explanation of this setting. In actual use, however, I found that for jobs whose execution time exceeds the expire setting, and for queues processed by a distributed deployment, this parameter and this design are a big pit...


The problem showed up in a distributed deployment that processes the queue: two servers each run the Laravel framework's artisan worker script, connect to the same MySQL database, and share one jobs queue table.

After deployment, both servers' scripts were started. It turned out that a record in the queue driver (the MySQL jobs table) that was already being executed by the first script was not skipped; instead it was treated as failed, and a new record was written to the failed_jobs table (Laravel stores failed queue jobs in failed_jobs), causing duplicated data.

Previously, running three worker processes on one server never produced this error: a later process would not pick up queue data already taken by an earlier process, and nothing was judged as failed. So why does multi-server processing make the queue driver's data go wrong?


According to the queue's intended flow, when the program runs, a worker takes a job from the queue driver, and the driver should record that the job has been taken, so that a second process fetching a job skips the queue data that is currently being executed.


After reading up on how the Laravel queue works, I finally had to look at the queue source code.

The source of the Laravel queue is in the Illuminate\Queue directory.


First, the jobs table used by the MySQL (database) driver:

    CREATE TABLE `jobs` (
      `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
      `queue` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
      `payload` longtext COLLATE utf8_unicode_ci NOT NULL,
      `attempts` tinyint(3) unsigned NOT NULL,
      `reserved` tinyint(3) unsigned NOT NULL,
      `reserved_at` int(10) unsigned DEFAULT NULL,
      `available_at` int(10) unsigned NOT NULL,
      `created_at` int(10) unsigned NOT NULL,
      PRIMARY KEY (`id`),
      KEY `jobs_queue_reserved_reserved_at_index` (`queue`,`reserved`,`reserved_at`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

The manual describes how queued jobs are stored: the payload field holds the serialized job. The Laravel queue can serialize a data model, and at execution time the queue system automatically re-fetches the full model instance from the database; see the manual for details.

The other state and time fields, however, are the key fields that guarantee the queue's handling of a job.

"Attempts" execution times, "reserved" execution status, "Reserved_at" Execution time, ' available_at ' booking execution time, ' Created_at ' is the queue creation time.


Two scripts listen for queue events: Listener.php and Worker.php. The source comments say the listener can handle a specified queue and connection, but in the end it is the worker that actually processes the queue. Laravel 5.4 dropped the queue:listen command in favour of queue:work, but what I describe here is a Laravel 5.2 problem; I don't know whether the issues below are the reason the later optimization removed listen.
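For reference, in 5.2 the two commands are typically started like this (an illustrative sketch; the connection name and option values are examples, not taken from the original setup):

    php artisan queue:listen database --queue=default --timeout=60 --tries=3
    php artisan queue:work database --daemon --sleep=3 --tries=3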


Continuing with the source of the Worker class that processes the queue: when fetching queue data it uses a pop method, which calls the pop method of the configured driver, such as database or Redis.

    $connection = $this->manager->connection($connectionName);

    $job = $this->getNextJob($connection, $queue);

    // If we're able to pull a job off of the stack, we will process it and
    // then immediately return back out. If there is no job on the queue
    // we will "sleep" the worker for the specified number of seconds.
    if (! is_null($job)) {
        return $this->process(
            $this->manager->getName($connectionName), $job, $maxTries, $delay
        );
    }


Here is the pop method of DatabaseQueue.php:

    /**
     * Pop the next job off of the queue.
     *
     * @param  string  $queue
     * @return \Illuminate\Contracts\Queue\Job|null
     */
    public function pop($queue = null)
    {
        $queue = $this->getQueue($queue);

        $this->database->beginTransaction();

        if ($job = $this->getNextAvailableJob($queue)) {
            $job = $this->markJobAsReserved($job);

            $this->database->commit();

            return new DatabaseJob(
                $this->container, $this, $job, $queue
            );
        }

        $this->database->commit();
    }

A database transaction is opened around the process of fetching the data.

The core of fetching queue data is $this->getNextAvailableJob($queue).

Enable the SQL query log to see how the queue data is queried:

    /**
     * Get the next available job for the queue.
     *
     * @param  string|null  $queue
     * @return \StdClass|null
     */
    protected function getNextAvailableJob($queue)
    {
        $this->database->enableQueryLog();

        $job = $this->database->table($this->table)
                    ->lockForUpdate()
                    ->where('queue', $this->getQueue($queue))
                    ->where(function ($query) {
                        $this->isAvailable($query);
                        $this->isReservedButExpired($query);
                    })
                    ->orderBy('id', 'asc')
                    ->first();

        var_dump($this->database->getQueryLog());

        return $job ? (object) $job : null;
    }

    array(1) {
      [0] => array(3) {
        'query' => string(165) "select * from `jobs` where `queue` = ? and ((`reserved` = ? and `available_at` <= ?) or (`reserved` = ? and `reserved_at` <= ?)) order by `id` asc limit 1 for update"
        'bindings' => array(5) {
          [0] => string(7) "default"
          [1] => int(0)
          [2] => int(1493634233)
          [3] => int(1)
          [4] => int(1493634173)
        }
        'time' => double(1.55)
      }
    }

As you can see from the SQL statement, there are two conditions for fetching queue data:

When reserved is 0 and available_at is less than or equal to the current time, the row is a job waiting to be executed. When reserved is 1 and reserved_at (the time execution started) is less than or equal to the threshold computed in $this->isReservedButExpired(), which is the current time minus the timeout in seconds, Carbon::now()->subSeconds($this->expire)->getTimestamp(), the job is considered to have expired.
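For reference, the two where-clause helpers in DatabaseQueue.php look roughly like this (paraphrased from the 5.2 source; details may differ slightly in your copy):

    // Job is waiting and its scheduled time has arrived.
    protected function isAvailable($query)
    {
        $query->where(function ($query) {
            $query->where('reserved', 0);
            $query->where('available_at', '<=', $this->getTime());
        });
    }

    // Job was reserved, but the reservation is older than "expire" seconds.
    protected function isReservedButExpired($query)
    {
        $expiration = Carbon::now()->subSeconds($this->expire)->getTimestamp();

        $query->orWhere(function ($query) use ($expiration) {
            $query->where('reserved', 1);
            $query->where('reserved_at', '<=', $expiration);
        });
    }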

The whole select runs with "for update", i.e. an exclusive lock.


After a qualifying job is obtained:

    /**
     * Mark the given job ID as reserved.
     *
     * @param  \StdClass  $job
     * @return \StdClass
     */
    protected function markJobAsReserved($job)
    {
        $job->reserved = 1;
        $job->attempts = $job->attempts + 1;
        $job->reserved_at = $this->getTime();

        $this->database->table($this->table)->where('id', $job->id)->update([
            'reserved'    => $job->reserved,
            'reserved_at' => $job->reserved_at,
            'attempts'    => $job->attempts,
        ]);

        return $job;
    }

The program updates the row and commits the transaction once the update is complete.


On the same server, when a second process fetches data it hits the pessimistic lock and has to wait until the first process has taken the data and updated reserved and the reservation time. In other words, when the Laravel queue uses the database driver, concurrent processes do not grab different rows at the same time; they contend for the same row and wait for one of them to update its state and reservation time, and that update is the first thing a worker does after obtaining the job. So the second process never takes the same data as the first, unless the job has expired.

In the pop method of DatabaseQueue.php, if you add sleep(10) after the queue data is acquired but before "$this->database->commit();", you can clearly see that the second worker does not fetch any other queue data in the meantime, which suggests that "for update" is an update-level exclusive lock and does not block plain selects.
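Concretely, the experiment is just adding a sleep right before the commit in pop() (a sketch of the modification, not part of the shipped source):

    if ($job = $this->getNextAvailableJob($queue)) {
        $job = $this->markJobAsReserved($job);

        sleep(10); // keep the transaction (and the row lock) open for a while

        $this->database->commit();

        return new DatabaseJob(
            $this->container, $this, $job, $queue
        );
    }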

Laravel queues on the database driver can sometimes show blocking behaviour; I don't know whether this is the cause.


If execution takes too long and exceeds the time set by the expire parameter, a second worker will pick up the first worker's job and judge it as timed out. At that point, depending on the maximum number of attempts (tries), it either inserts a new queue record so the job keeps being retried, or inserts a record into the failed_jobs error table, marking the job as failed.
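For context, the worker's process method handles this roughly as follows (paraphrased from the 5.2 Worker; check your installed source for the exact code):

    public function process($connection, Job $job, $maxTries = 0, $delay = 0)
    {
        // Too many attempts already: log the job into failed_jobs and delete it.
        if ($maxTries > 0 && $job->attempts() > $maxTries) {
            return $this->logFailedJob($connection, $job);
        }

        try {
            // Run the job; on success the job deletes its own row.
            $job->fire();

            return ['job' => $job, 'failed' => false];
        } catch (\Exception $e) {
            // On failure, release the job back onto the queue so it can be retried.
            if (! $job->isDeleted()) {
                $job->release($delay);
            }

            throw $e;
        }
    }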


The above is the logic Laravel follows when running the queue on MySQL. With the two servers running the framework's artisan script against one jobs table, the "queue failed" problem was caused by inconsistent server times: when the later server ran, it judged the earlier server's queue data as timed out and inserted a new record into failed_jobs because the maximum number of failures had been reached; otherwise it would have inserted new data and kept retrying.
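A made-up example of how clock skew triggers this: suppose expire is 60 seconds and server B's clock runs 90 seconds ahead of server A, which reserved the job.

    $expire     = 60;
    $reservedAt = 1493634000;               // reserved_at written by server A
    $nowOnB     = $reservedAt + 30 + 90;    // only 30 real seconds later, plus 90s of skew
    $threshold  = $nowOnB - $expire;        // what isReservedButExpired() computes on server B

    var_dump($reservedAt <= $threshold);    // bool(true): B treats the job as expired after just 30s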


So the queue:listen execution time parameter --timeout=60 must be set lower than the queue job expiration time, the expire parameter!

Also, queue:work in Laravel 5.2 does not have a --timeout parameter at all...
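As a rule of thumb (illustrative values, assuming the default expire of 60 in config/queue.php), start the listener with a timeout smaller than expire, for example:

    php artisan queue:listen database --timeout=50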


Last is the processing logic after a job finishes executing.

If the job executes successfully, its row is deleted from jobs and all is well. On failure, including timeouts and exceptions, the worker decides, based on the configured maximum number of failures, whether to insert a new record to retry or to insert a record into the failed_jobs table.

When an error occurs, the exception handling in handleJobException calls the release method, $job->release($delay), which is ultimately implemented by DatabaseQueue.php's pushToDatabase.

When the new record is inserted, attempts holds the number of attempts so far, reserved is 0, and available_at is the current timestamp plus the configured delay.
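For reference, the record that pushToDatabase builds looks roughly like this (paraphrased from the 5.2 DatabaseQueue source; verify against your copy):

    protected function buildDatabaseRecord($queue, $payload, $availableAt, $attempts = 0)
    {
        return [
            'queue'        => $queue,
            'payload'      => $payload,
            'attempts'     => $attempts,
            'reserved'     => 0,
            'reserved_at'  => null,
            'available_at' => $availableAt,     // current time plus the release delay
            'created_at'   => $this->getTime(),
        ];
    }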

In this way, the whole queue lifecycle forms a complete set of data operations.


Laravel 5.4 changed the queue functionality quite a lot; the manual notes:

Task Expiration and timeouts

Task Execution Time

In the configuration file config/queue.php, each connection defines a retry_after option. This option defines how many seconds after a job starts executing it will be released back onto the queue. If retry_after is set to 90 and the job has not finished after running for 90 seconds, it is released back onto the queue instead of being deleted. Without question, you should set retry_after to the maximum number of seconds a job could possibly take to execute.
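For comparison with the 5.2 config shown at the top, the 5.4+ equivalent looks like this (a sketch; only the relevant keys shown):

    'database' => [
        'driver'      => 'database',
        'table'       => 'jobs',
        'queue'       => 'default',
        'retry_after' => 90,   // must be larger than the longest possible job run time
    ],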


Laravel 5.4 removed the queue's listen command, and work gained a --timeout parameter. When Laravel 5.5 comes out, it is worth upgrading directly.


Appendix: a Laravel 5.2 test script. Most of what I found online earlier still wraps the job in a command; in fact, since 5.2, using a job class is very simple.

Define the job under the Jobs directory. In handle you can add whatever test scenario you like; for example, I throw an exception so the job fails directly.

    class MyJob extends Job implements ShouldQueue
    {
        use InteractsWithQueue, SerializesModels;

        private $key;
        private $value;

        /**
         * Create a new job instance.
         *
         * @return void
         */
        public function __construct($key, $value)
        {
            $this->key = $key;
            $this->value = $value;
        }

        /**
         * Execute the job.
         *
         * @return void
         */
        public function handle()
        {
            for ($i = 0; $i < 20; $i++) {
                echo "$i\n";
                sleep(1);
            }

            echo "sss\t".$this->key."\t".date("Y-m-d H:i:s")."\n";

            throw new \Exception("test\n");
            // Redis::hset('queue.test', $this->key, $this->value);
        }

        public function failed()
        {
            dump('failed');
        }
    }

A controller dispatches the job onto the queue. key and value were originally used to insert test data into Redis; set the job parameters according to your own test scenario.

    for ($i = 0; $i < 5; $i++) {
        echo "$i";
        $job = (new MyJob($i, $i))->delay(20);
        $this->dispatch($job);
    }

My example queues 5 jobs; open several shells and run the artisan worker in parallel to test.
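For example, run the same command in each shell (option values are illustrative):

    php artisan queue:listen --timeout=60 --tries=2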


I intended to read the Redis queue code and write it up together, but I haven't looked through the Redis code much yet.

For the Redis driver you can refer to http://www.cnblogs.com/z1298703836/p/5346728.html, which describes the logic of the Laravel queue's Redis driver in detail. The Redis driver stores queues in list and zset structures; the execution process pops the job off the queue, and there is no database-style "for update", so queue blocking should not occur.
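If you want to watch those structures while testing, the Redis keys as I understand them from the 5.2 RedisQueue source are the following (key names assume the default queue; verify against your version):

    redis-cli LRANGE queues:default 0 -1                       # pending jobs (list)
    redis-cli ZRANGE queues:default:delayed 0 -1 WITHSCORES    # delayed jobs (sorted set)
    redis-cli ZRANGE queues:default:reserved 0 -1 WITHSCORES   # reserved jobs, scored by expiry time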

But the queue job expiration time is configured the same way as for the database driver, so the same rule applies: the queue:listen execution time parameter --timeout=60 must be set lower than the queue job expiration time, the expire parameter!


Finally finished ...
