Originally I wanted to check whether chunk supports where and other filter conditions. While looking into it I found an article that discusses exactly this "update inside chunk" problem, so I am reproducing it here as a record, to save everyone from the same pitfall. Original link: http://www.jianshu.com/p/5dafd0d6e69a (from Jianshu). In some cases we need to operate on large amounts of data, and if we simply use foreach we are likely to hit an operation timeout.
In the Laravel framework we can easily solve this with the chunk method.
Let's look at a simple example:
$users = User::all();
foreach ($users as $user) {
    $some_value = ($user->some_field > 0) ? 1 : 0;
    // some other logic
    $user->update(['some_other_field' => $some_value]);
}
There is nothing wrong with this code, but when the amount of data is large the picture is less rosy: on the one hand the run time grows, and on the other all the rows are held in memory at once.
In Laravel, the chunk method splits the data into blocks and handles this problem very well. Here is the same logic using chunk:
User::chunk(100, function ($users) {
    foreach ($users as $user) {
        $some_value = ($user->some_field > 0) ? 1 : 0;
        // there might be more logic here
        $user->update(['some_other_field' => $some_value]);
    }
});
This code updates 100 records, and once that batch is finished it moves on to the next 100, and so on. In other words, each iteration operates on one block of data rather than the entire table.
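The loop behind chunk — fetch a page, hand it to the callback, fetch the next page — can be sketched outside Laravel. The following is a hypothetical Python simulation of the idea, not Laravel's actual code; the fetch_page callable and the fake table are invented for illustration:

```python
def chunk(fetch_page, count, callback):
    """Rough Python analogue of Laravel's chunk(): pull `count` rows
    per page and hand each page to `callback`, so only one page of
    rows is held in memory at a time."""
    page = 1
    results = fetch_page(page, count)
    while results:
        if callback(results) is False:  # callback can abort the loop early
            return False
        page += 1
        results = fetch_page(page, count)
    return True

# Example: a fake table of 250 rows, paged 100 at a time.
table = list(range(1, 251))
fetch = lambda page, count: table[(page - 1) * count : page * count]

sizes = []
chunk(fetch, 100, lambda rows: sizes.append(len(rows)))
print(sizes)  # [100, 100, 50]
```

As long as the underlying data does not change between pages, every row is visited exactly once; the interesting failure mode below appears only when the callback mutates the filtered set.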
It is important to note that when chunk is combined with a filter condition, and the update itself changes that condition, some data will be missed. Look at this code:
User::where('approved', 0)->chunk(100, function ($users) {
    foreach ($users as $user) {
        $user->update(['approved' => 1]);
    }
});
Running this code raises no errors, but the where condition selects users whose approved is 0 and then updates approved to 1. In the process, the first block of rows is modified; the next block is then selected from the already-modified result set, which has shrunk while the page number was incremented by 1. After execution, only half of the matching rows have actually been updated.
If that is not obvious, let's look at the underlying implementation of chunk. Sticking with the code above, suppose there are 400 records in total, processed in blocks of 100.
page = 1: at the start page is 1, so records 1-100 are selected and processed;
page = 2: by now the first 100 records all have approved = 1, so the second query's filter matches only records 101-400; with page = 2 the query skips the first 100 of those, so records 201-300 are processed while 101-200 are never touched;
and so on, until the shrinking result set runs out.
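The walkthrough above can be reproduced with an in-memory toy model. This is a hypothetical Python sketch (the rows list and for_page helper are invented for illustration) of offset pagination over a filter that the update itself invalidates; it finishes with exactly half the rows updated:

```python
# 400 rows, all matching the filter approved == 0 at the start.
rows = [{"id": i, "approved": 0} for i in range(1, 401)]

def for_page(page, count):
    """Mimic ->where('approved', 0)->forPage(page, count)->get():
    re-run the filter against current data, then slice by offset/limit."""
    matching = [r for r in rows if r["approved"] == 0]
    offset = (page - 1) * count
    return matching[offset:offset + count]

page = 1
results = for_page(page, 100)
while results:
    for r in results:
        r["approved"] = 1          # the update shrinks the filtered set
    page += 1
    results = for_page(page, 100)

updated = sum(r["approved"] for r in rows)
print(updated)  # 200 -- half of the 400 rows are silently skipped
```

Page 1 processes rows 1-100; page 2 skips 100 rows of the now-300-row filtered set and processes 201-300; page 3 asks for offset 200 of the remaining 200 rows and gets nothing, so rows 101-200 and 301-400 are never updated.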
public function chunk($count, callable $callback)
{
    $results = $this->forPage($page = 1, $count)->get();

    while (count($results) > 0) {
        // On each chunk result set, we will pass them to the callback and then let the
        // developer take care of everything within the callback, which allows us to
        // keep the memory low for spinning through large result sets for working.
        if (call_user_func($callback, $results) === false) {
            return false;
        }

        $page++;

        $results = $this->forPage($page, $count)->get();
    }

    return true;
}