In the earlier article Split natural number: pure while implementation (Part 1: Train of thought), I provided a solution to the question from Jeff's programming practice, Splitting the natural numbers, and used two examples to explain the idea. Have you managed to solve the problem with that idea yet?
First of all, what is the search domain of this problem? It is everything from [min, min, min, ..., min] to [max, max, max, ..., max], or some subset of it. Even without knowing any algorithm, intuition tells you that the search has to happen somewhere between [min, min, ..., min] and [max, max, ..., max]. Whether an algorithm is good therefore depends on how effectively it narrows this search domain. For example, with n = 3, min = 1 and max = 3, the domain runs from [1, 1, 1] to [3, 3, 3] and contains (max - min + 1)^n = 27 candidate tuples.
Without narrowing the search domain at all, you can enumerate every possibility in it and verify whether each one is a solution, which is what Jeff's doSimple example does. Then, to avoid producing duplicate solutions, you may realize that every valid solution can be written as a non-descending sequence, so all branches that descend are pruned away, which is what Jeff's doBetter example does. Finally, among the non-descending sequences, you still need to prune the branches whose total cannot equal the target sum, and this is the hardest pruning to do; that is what doBest does. A small sketch of the non-descending enumeration follows.
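To make the doBetter idea concrete, here is a minimal sketch of my own (not Jeff's actual code): it enumerates only non-descending sequences and checks the sum at the leaves, but it does not yet prune branches whose sum can never reach m.

// Sketch only: enumerate non-descending sequences of length n with values in [min, max]
// and print the ones whose sum equals m. This illustrates the doBetter idea;
// branches whose sum can never reach m are NOT pruned here.
function doBetterSketch(m, n, min, max) {
    var array = new Array(n);
    var recurse = function (i, low, sum) {
        if (i === n) {
            if (sum === m) {
                console.log(m + ' = ' + array.join(' + '));
            }
            return;
        }
        // Non-descending constraint: each value starts at the previous one.
        for (var v = low; v <= max; v++) {
            array[i] = v;
            recurse(i + 1, v, sum + v);
        }
    };
    recurse(0, min, 0);
}

// Example: doBetterSketch(6, 3, 1, 3) prints "6 = 1 + 2 + 3" and "6 = 2 + 2 + 2".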
Jeff's approach in doBest is to compute itemMinInclusive and itemMaxInclusive for the position currently being worked on, with minInclusive <= itemMinInclusive and itemMaxInclusive <= maxInclusive. The values in [minInclusive, itemMinInclusive) and (itemMaxInclusive, maxInclusive] are invalid; only the values in [itemMinInclusive, itemMaxInclusive] are valid.
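I do not reproduce Jeff's doBest source here; the following is only my own reconstruction of how such per-position bounds can be derived, assuming prev is the value of the previous position, rest is the sum still to be distributed, and k is the number of positions left to fill (including the current one).

// Sketch (my own reconstruction, not Jeff's doBest code): compute the valid range
// for the current position of a non-descending split with values in [min, max].
function itemBounds(prev, rest, k, max) {
    // Lower bound: at least prev (non-descending), and at least rest - (k - 1) * max,
    // otherwise the remaining k - 1 positions could not absorb the rest even at max each.
    var itemMinInclusive = Math.max(prev, rest - (k - 1) * max);
    // Upper bound: at most max, and at most floor(rest / k), because the current value
    // is the smallest of the k remaining values, so k times it cannot exceed rest.
    var itemMaxInclusive = Math.min(max, Math.floor(rest / k));
    return { itemMinInclusive: itemMinInclusive, itemMaxInclusive: itemMaxInclusive };
}

// Example: splitting 6 over 3 positions with min = 1, max = 3, at the first position:
// itemBounds(1, 6, 3, 3) gives { itemMinInclusive: 1, itemMaxInclusive: 2 },
// so the first item can only be 1 or 2. Every value inside these bounds can still be
// extended to a valid split, which is exactly the pruning doBest aims for.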
Dragonpig takes a try-and-prune approach: it also cuts off the invalid branches, but only after the loop has stepped onto the first invalid node of each branch, tried it, and seen it return false before pruning, so its performance sits between doBetter and doBest. Xu Shaoxia's follow-up does prune effectively and matches doBest in time complexity, but the cached data occupies three times the space actually needed for the state, and that extra space is simply wasted. In my view the ideal approach is to cache only the running sum, and not cache the lower bound (that is, itemMinInclusive) or the upper bound (that is, itemMaxInclusive); this preserves the performance while using the least space. As for why I say the lower and upper bounds are unnecessary, just look at my code:
function main(m, n, min, max) {
    // Splits m into n non-descending parts, each within [min, max].
    // Assumes the input is feasible, i.e. n * min <= m <= n * max (no validation is done).
    var array = new Array(n);
    var i = 0;
    var write = function () {
        console.log(m + ' = ' + array.join(' + '));
    };
    // scan() accumulates the suffix sum from the right; it returns true while the
    // current position cannot be stepped, i.e. while array[i] is not at least 2
    // below the last element, so the outer loop keeps moving left.
    var scan = function () {
        if (scan.start) {
            scan.start = false;
            scan.sum = 0;
        }
        scan.sum += array[i];
        return (array[i] > array[n - 1] - 2);
    };
    // step() increments the current position and records how much the positions
    // to its right still have to provide.
    var step = function () {
        array[i]++;
        fill.sum = scan.sum - array[i];
        i++;
    };
    // fill() fills positions i .. n-1 with the smallest values that can still reach
    // the required sum; note that only the remaining sum is cached, no bounds.
    var fill = function () {
        while (i < n) {
            array[i] = Math.max(fill.sum - (n - i - 1) * max, ((i == 0) ? min : array[i - 1]));
            fill.sum -= array[i];
            i++;
        }
        i--;
    };
    fill.sum = m;
    fill();
    write();
    while (true) {
        scan.start = true;
        // Walk from the right to the first position that can be stepped.
        while (i >= 0 && scan()) {
            i--;
        }
        if (i < 0) {
            break;      // no position can be stepped: all splits have been written
        }
        step();
        fill();
        write();
    }
}
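As a quick sanity check (this call is my own illustration, not part of the original code), running the function on a small case prints every valid split exactly once:

main(6, 3, 1, 3);
// prints:
// 6 = 1 + 2 + 3
// 6 = 2 + 2 + 2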