**Recurrences**

Recurrences go hand in hand with the divide-and-conquer method, because the running time of a divide-and-conquer algorithm is naturally described by a recurrence. A **recurrence** is an equation or inequality that describes a function in terms of its value on smaller inputs. For example, in Section 2.3.2 we described the worst-case running time T(n) of merge sort by the recurrence

T(n) = Θ(1)             if n = 1
T(n) = 2T(n/2) + Θ(n)   if n > 1        (4.1)

whose solution is T(n) = Θ(n lg n).

Recurrences can take many forms. For example, a recursive algorithm might divide the problem into subproblems of unequal size, such as a 2/3-to-1/3 split. If the divide and combine steps both take linear time, such an algorithm gives rise to the recurrence T(n) = T(2n/3) + T(n/3) + Θ(n).

Subproblems are not necessarily a constant fraction of the original problem size. For example, a recursive version of linear search creates just one subproblem, containing only one element fewer than the original problem. Each recursive call takes constant time plus the time for the recursive call it makes, yielding the recurrence T(n) = T(n−1) + Θ(1).
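As an illustration, here is a minimal Python sketch of such a recursive linear search; the function name and the 0-based index parameter are our own choices, not from the text:

```python
def linear_search(A, x, i=0):
    """Recursive linear search: return the index of x in A[i:], or None.

    Each call does constant work plus one recursive call on a problem
    that is one element smaller, matching T(n) = T(n-1) + Theta(1) = Theta(n).
    """
    if i == len(A):        # base case: nothing left to search
        return None
    if A[i] == x:          # constant-time comparison
        return i
    return linear_search(A, x, i + 1)  # subproblem of size n - 1
```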

This chapter introduces three methods for solving recurrences, that is, for obtaining asymptotic Θ- or O-bounds on the solution:

* In the **substitution method**, we guess a bound and then use mathematical induction to prove our guess correct.

* The **recursion-tree method** converts the recurrence into a tree whose nodes represent the costs incurred at the various levels of the recursion. We then use techniques for bounding summations to solve the recurrence.

* The **master method** provides bounds for recurrences of the form

T(n) = aT(n/b) + f(n)

where a ≥ 1, b > 1, and f(n) is a given function. Recurrences of this form arise frequently; they characterize a divide-and-conquer algorithm that creates a subproblems, each of which is 1/b the size of the original problem, and whose divide and combine steps together take f(n) time.
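As a rough illustration of how the master method classifies such recurrences, the following Python helper handles the common special case f(n) = Θ(n^k). The function `master_case` is a hypothetical sketch, not part of the text; it compares b^k with a exactly to avoid floating-point trouble, and case 3's regularity condition holds automatically for polynomial f:

```python
import math

def master_case(a, b, k):
    """Classify T(n) = a*T(n/b) + Theta(n^k) by the master method.

    Simplified sketch restricted to polynomial driving functions
    f(n) = n^k. Comparing b**k with a is an exact integer test for
    whether k is below, equal to, or above log_b(a).
    """
    if b ** k < a:                      # case 1: the leaves dominate
        return f"Theta(n^{math.log(a, b):g})"
    if b ** k == a:                     # case 2: every level costs the same
        return f"Theta(n^{k:g} lg n)"
    return f"Theta(n^{k:g})"            # case 3: the root dominates
```

For merge sort (a = 2, b = 2, f(n) = n) this reports Θ(n^1 lg n), matching the solution of recurrence (4.1).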

We occasionally encounter recurrences that are not equations but inequalities, such as T(n) ≤ 2T(n/2) + Θ(n). Because such a recurrence states only an upper bound on T(n), we couch its solution using O-notation rather than Θ-notation.
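As a preview of the substitution method applied to this inequality: guess T(n) ≤ cn lg n for a suitable constant c > 0, write the Θ(n) term as at most dn, and verify the guess inductively (a sketch that glosses over the base case and the choice of constants):

```latex
\begin{aligned}
T(n) &\le 2T(n/2) + dn \\
     &\le 2\,c\,\tfrac{n}{2}\lg\tfrac{n}{2} + dn
       && \text{inductive hypothesis } T(n/2) \le c\,\tfrac{n}{2}\lg\tfrac{n}{2} \\
     &= c\,n\lg n - c\,n + dn
       && \text{since } \lg\tfrac{n}{2} = \lg n - 1 \\
     &\le c\,n\lg n
       && \text{whenever } c \ge d,
\end{aligned}
```

so T(n) = O(n lg n), as claimed.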

**Technicalities in recurrences**

In practice, we neglect certain technical details when stating and solving recurrences. For example, if merge sort is called on n elements and n is odd, the two subproblems have sizes ⌈n/2⌉ and ⌊n/2⌋ rather than exactly n/2, because n/2 is not an integer when n is odd.

When stating and solving recurrences, we often omit floors, ceilings, and boundary conditions. We forge ahead without these details and later determine whether they matter. They usually do not, but you need to know when they do. Experience helps, and so do some theorems stating that, for many recurrences characterizing divide-and-conquer algorithms, these details do not affect the asymptotic bounds.
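One way to convince yourself numerically that the floors and ceilings do not matter here: evaluate the exact merge-sort recurrence, ceilings included, and compare it against n lg n. This is a quick sketch; the function name and the base value T(1) = 1 are arbitrary choices:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """Exact merge-sort recurrence with ceiling and floor:
    T(n) = T(ceil(n/2)) + T(floor(n/2)) + n, with T(1) = 1."""
    if n == 1:
        return 1
    return T((n + 1) // 2) + T(n // 2) + n

# The ratio T(n) / (n lg n) stays bounded as n grows, so the
# floors and ceilings leave the Theta(n lg n) bound intact.
```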

Find-max-crossing-subarray(A, low, mid, high)

 1  left-sum = -∞
 2  sum = 0
 3  for i = mid downto low
 4      sum = sum + A[i]
 5      if sum > left-sum
 6          left-sum = sum
 7          max-left = i
 8  right-sum = -∞
 9  sum = 0
10  for j = mid + 1 to high
11      sum = sum + A[j]
12      if sum > right-sum
13          right-sum = sum
14          max-right = j
15  return (max-left, max-right, left-sum + right-sum)

This procedure works as follows. Lines 1–7 find a maximum subarray of the left half, A[low..mid]. Since this subarray must contain A[mid], the for loop of lines 3–7 starts its loop variable i at mid and decrements it until it reaches low, so that every subarray it considers has the form A[i..mid]. Lines 1–2 initialize the variables left-sum, which holds the greatest sum found so far, and sum, which holds the sum of the entries in A[i..mid]. Whenever line 5 finds a subarray A[i..mid] whose sum exceeds left-sum, line 6 updates left-sum to this subarray's sum, and line 7 updates max-left to record the current index i. Lines 8–14 work analogously for the right half, A[mid+1..high]. Here, the for loop of lines 10–14 starts its loop variable j at mid+1 and increments it until it reaches high, so that every subarray it considers has the form A[mid+1..j]. Finally, line 15 returns the indices max-left and max-right that demarcate a maximum subarray crossing the midpoint, along with the sum left-sum + right-sum of the entries of the subarray A[max-left..max-right].

If the subarray A[low..high] contains n entries (so that n = high − low + 1), the call Find-max-crossing-subarray(A, low, mid, high) takes Θ(n) time. Since each iteration of each of the two for loops takes Θ(1) time, we just need to count how many iterations there are altogether. The for loop of lines 3–7 makes mid − low + 1 iterations, and the for loop of lines 10–14 makes high − mid iterations, so the total number of iterations is

(mid − low + 1) + (high − mid) = high − low + 1 = n.
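The pseudocode above can be transcribed into Python as follows. This is a sketch using 0-based indices in place of the pseudocode's 1-based ones; the initial values of max_left and max_right are a defensive convenience, since the loops always overwrite them:

```python
def find_max_crossing_subarray(A, low, mid, high):
    """Find a maximum subarray of A[low..high] crossing the midpoint.

    Returns (max_left, max_right, total), where A[max_left..max_right]
    (inclusive, 0-based) is a maximum subarray containing both A[mid]
    and A[mid + 1]. Runs in Theta(n) time for n = high - low + 1.
    """
    left_sum = float("-inf")
    total = 0
    max_left = mid
    for i in range(mid, low - 1, -1):      # left half, rightmost entry first
        total += A[i]
        if total > left_sum:
            left_sum = total
            max_left = i
    right_sum = float("-inf")
    total = 0
    max_right = mid + 1
    for j in range(mid + 1, high + 1):     # right half, leftmost entry first
        total += A[j]
        if total > right_sum:
            right_sum = total
            max_right = j
    return (max_left, max_right, left_sum + right_sum)
```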

With a linear-time Find-max-crossing-subarray in hand, we can write pseudocode for a divide-and-conquer algorithm that solves the maximum-subarray problem:

Find-maximum-subarray(A, low, high)

 1  if high == low
 2      return (low, high, A[low])    // base case: only one element
 3  else mid = ⌊(low + high)/2⌋
 4      (left-low, left-high, left-sum) = Find-maximum-subarray(A, low, mid)
 5      (right-low, right-high, right-sum) = Find-maximum-subarray(A, mid + 1, high)
 6      (cross-low, cross-high, cross-sum) = Find-max-crossing-subarray(A, low, mid, high)
 7      if left-sum ≥ right-sum and left-sum ≥ cross-sum
 8          return (left-low, left-high, left-sum)
 9      elseif right-sum ≥ left-sum and right-sum ≥ cross-sum
10          return (right-low, right-high, right-sum)
11      else return (cross-low, cross-high, cross-sum)

The initial call Find-maximum-subarray(A, 1, A.length) finds a maximum subarray of A[1..n].
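As a runnable counterpart to the pseudocode, here is a self-contained Python sketch of the whole algorithm (0-based indices; the default arguments and the inlined midpoint-crossing scan are conveniences of this sketch, not part of the pseudocode):

```python
def find_maximum_subarray(A, low=0, high=None):
    """Divide-and-conquer maximum subarray, 0-based indices.

    Returns (i, j, s) where A[i..j] (inclusive) is a maximum subarray
    of A[low..high] and s is its sum. Running time is Theta(n lg n).
    """
    if high is None:                     # convenience default: whole array
        high = len(A) - 1
    if high == low:                      # base case: a single element
        return (low, high, A[low])
    mid = (low + high) // 2
    left = find_maximum_subarray(A, low, mid)
    right = find_maximum_subarray(A, mid + 1, high)
    # Best subarray crossing the midpoint: a maximum suffix of the left
    # half joined to a maximum prefix of the right half (Theta(n) scan).
    left_sum, s, max_left = float("-inf"), 0, mid
    for i in range(mid, low - 1, -1):
        s += A[i]
        if s > left_sum:
            left_sum, max_left = s, i
    right_sum, s, max_right = float("-inf"), 0, mid + 1
    for j in range(mid + 1, high + 1):
        s += A[j]
        if s > right_sum:
            right_sum, max_right = s, j
    cross = (max_left, max_right, left_sum + right_sum)
    # Return whichever of the three candidates has the greatest sum.
    return max(left, right, cross, key=lambda t: t[2])
```

Note that, as in the pseudocode, the answer for an all-negative array is its single largest entry, since every subarray considered is nonempty.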

**Analysis of the divide-and-conquer algorithm**

Assume that the size of the original problem is a power of 2, so that all subproblem sizes are integers. We use T(n) to denote the running time of Find-maximum-subarray on a subarray of n elements. For starters, line 1 takes constant time. The base case, n = 1, is also easy: line 2 takes constant time, so

T(1) = Θ(1). (4.5)

The recursive case occurs when n > 1. Lines 1 and 3 take constant time. Each of the subproblems solved in lines 4 and 5 has n/2 elements (our assumption that the original problem size is a power of 2 guarantees that n/2 is an integer), so solving each takes T(n/2) time. Because we solve two subproblems, one for the left subarray and one for the right, lines 4 and 5 contribute 2T(n/2) to the running time. As we have already seen, the call to Find-max-crossing-subarray in line 6 takes Θ(n) time, and lines 7–11 take only Θ(1) time. For the recursive case, therefore, we have

T(n) = Θ(1) + 2T(n/2) + Θ(n) + Θ(1) = 2T(n/2) + Θ(n). (4.6)

Combining equations (4.5) and (4.6) gives a recurrence for the running time T(n) of Find-maximum-subarray:

T(n) = Θ(1)             if n = 1
T(n) = 2T(n/2) + Θ(n)   if n > 1        (4.7)

This recurrence is the same as recurrence (4.1) for merge sort.
