Note: the heaps in this article are min-heaps (small-root heaps).
We want to design a heap that supports merging efficiently, in $o\left(n\right)$ time. With a single array this is hopeless: a merge has to copy one array into the other, which for two heaps of comparable size costs $\Theta\left(n\right)$. That is why all advanced data structures that support efficient merging rely on pointers. There is a practical price, though: manipulating pointers is generally more time-consuming than the multiplications used for array indexing, so the constant factors of every operation can suffer.
How do we get there? With an evolution of the binary heap: the leftist heap, also known as the leftist tree. Like the binary heap, the leftist heap has both a structural property and an ordering property, and its heap-order property is exactly the same as the binary heap's. The only difference is that the leftist heap is not ideally balanced; on the contrary, it tends to be extremely unbalanced, leaning to the left in its topology. So why introduce this new variant? Let's start with its design motivation and structural definition. As stated above, our goal is efficient merging. Many existing methods can perform a merge, just too slowly, and the essence of data structures is the relentless optimization of performance, so we introduce a new structure to reach this goal. Below we first analyze a few naive merging algorithms, then approach the protagonist step by step.
The most obvious approach: of the two heaps, repeatedly take the minimum element out of the smaller one, B, and insert it into the larger one, A, finishing when B is empty.
In a nutshell: Insert(DeleteMin(B), A);
But this is too slow, turtle speed. Analysis: let the two heaps have sizes $n$ and $m$ with $m \leq n$. Each of the $m$ calls to DeleteMin(B) costs $O\left(\log m\right)$ and each insertion into A costs $O\left(\log\left(n+m\right)\right)$, for $O\left(m \log\left(n+m\right)\right)$ in total. Then we recall Floyd's batch heap-construction algorithm: the more efficient way is to mix the two heaps first and then percolate down, maintaining the structure of the whole and organizing it into a complete binary heap. To summarize: BuildHeap(n+m, Union(A, B)). Floyd's algorithm needs only linear time, $O\left(n + m\right)$ in total, which is already much better.
But this is still not satisfactory, because Floyd's algorithm assumes unordered input, while our two heaps are already ordered: the algorithm ignores information we possess, and exploiting that ordering should let us go faster. From this standpoint we have reason to believe a more efficient data structure and algorithm must exist. Indeed, Clark Allan Crane explored this and invented a new structure, the leftist heap, presented in his 1972 doctoral dissertation "Linear Lists and Priority Queues as Balanced Binary Trees". By attaching a small number of extra conditions to the heap, this structure lets a merge adjust only a few nodes, so insertion and deletion need only $O\left(\log n\right)$ time, a great stride beyond the $O\left(n \log n\right)$ and $O\left(n\right)$ bounds just derived. His new condition is a "one-sided tilt": the node distribution is skewed to the left, and the trick behind the algorithm's efficiency is that the merge operation involves only the right side, which contains few nodes.
For example, here is a typical diagram of a leftist heap: long on the left, short on the right. The right path can be kept tightly controlled at $O\left(\log n\right)$ nodes, which is exactly what keeps the merge operation within the $O\left(\log n\right)$ bound mentioned above. This is captured by a theorem: a leftist heap with $r$ nodes on its right path must have at least $2^{r} - 1$ nodes.
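A short inductive argument for this theorem may help (my own proof sketch, following the standard induction on $r$):

```latex
% Claim: a leftist heap whose right path contains r nodes has at least 2^r - 1 nodes.
\textbf{Sketch.} Induct on $r$. For $r = 1$ the heap is non-empty, so it has at
least $2^{1} - 1 = 1$ node. For $r > 1$: the root's right subtree has a right
path of $r - 1$ nodes, so by induction it holds at least $2^{r-1} - 1$ nodes.
The leftist property gives $npl\left(lc\right) \geq npl\left(rc\right) = r - 2$,
so the left subtree's right path also has at least $r - 1$ nodes, and it too
holds at least $2^{r-1} - 1$ nodes. Counting the root:
\[
1 + 2\left(2^{r-1} - 1\right) = 2^{r} - 1 .
\]
Read contrapositively: a leftist heap with $n$ nodes has a right path of at
most $\log_{2}\left(n + 1\right)$ nodes.
```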
So how does it get that fast? It is too early to answer, because another question must be answered first. The third paragraph above said the leftist heap is not ideally balanced, but rather tends toward extreme imbalance. With balance gone, doesn't the structure fall apart? What we need to understand is this: for a heap, the ordering property is the essential characteristic; everything else is secondary and can be sacrificed when necessary. After all, computer science is the study of trade-offs.
Now let's discuss the nature of the leftist heap, and introduce a concept: the null path length (NPL), defined as the length of the shortest path from a node $x$ down to a node with fewer than two children. The numbers written inside the nodes in the figure are their NPL values. Thus a node with zero or one child has NPL 0, and we define $npl\left(null\right) = -1$. It is then natural that the NPL of each node is given by the formula:
$npl\left(x\right) = 1 + \min\left(npl\left(lc\right),\; npl\left(rc\right)\right)$
This formula should look familiar: the recursive formula for tree height is nearly identical, with $\min$ replaced by $\max$. The analogy may give us a deeper understanding of both concepts.
With this indicator we can measure how tilted the heap's structure is. If the left child's NPL is never less than the right child's, the node is said to be left-leaning (politically progressive, as the joke goes); if every node satisfies this property, the structure is called a leftist heap, also rendered as left-leaning heap. And because the NPL definition takes the smaller of the two children's values plus one, we only ever need to look down the right side. Summarized as follows:
Left-leaning: for any node $x$, $npl\left(x\mbox{->}lc\right) \geq npl\left(x\mbox{->}rc\right)$
Corollary: for any node $x$, $npl\left(x\right) = 1 + npl\left(x\mbox{->}rc\right)$, which follows immediately by substituting the left-leaning inequality into the $\min$ formula above.
We can also infer that any sub-heap of a leftist heap must itself be a leftist heap. The third paragraph said the leftist heap tends to gather its nodes on the left, but note that this is only an overall tendency; a particular instance does not necessarily lean left.
The implementation is discussed below: first the merge, then insertion and deletion built on top of it.
Let's start with the type declaration.
    #ifndef LeftHeap_H
    #define LeftHeap_H

    struct TreeNode;
    typedef struct TreeNode *LeftHeap;

    LeftHeap Init(void);
    int FindMin(LeftHeap H);
    LeftHeap Merge(LeftHeap H1, LeftHeap H2);
    /* Insert and DeleteMin take a LeftHeap * so that the caller's
       root pointer is updated in place. */
    void Insert(int x, LeftHeap *H);
    int DeleteMin(LeftHeap *H);

    #endif /* LeftHeap_H */

    /* In the implementation file: */
    #include <stdlib.h>

    struct TreeNode {
        int value;
        LeftHeap left;
        LeftHeap right;
        int npl;
    };
The merging algorithm can be described very concisely in recursive form. For the general case:
The problem of merging heaps A and B is transformed, via recursion, into a smaller problem of the same kind:
Specifically, assume A holds the smaller root. We take A's right sub-heap and recursively merge it with the whole of B, and the result becomes A's new right sub-heap. Of course, to ensure that A still satisfies the left-leaning property, after this recursive merge returns we also compare the NPL of A's left child with that of the newly merged right sub-heap and, if necessary, swap the two. The recursive version is written as follows:
    static void SwapChildren(LeftHeap H)
    {
        LeftHeap tmp = H->left;
        H->left = H->right;
        H->right = tmp;
    }

    LeftHeap Merge(LeftHeap A, LeftHeap B)
    {
        /* Recursive bases: merging with an empty heap is trivial. */
        if (A == NULL)
            return B;
        if (B == NULL)
            return A;
        /* Reaching here means neither heap is empty. Compare the two root
           values and, if necessary, swap the names so that A always holds
           the smaller root; B then becomes a descendant of A during the
           rest of the recursion. */
        if (A->value > B->value) {
            LeftHeap tmp = A;
            A = B;
            B = tmp;
        }
        /* Merge B into A's right sub-heap. */
        A->right = Merge(A->right, B);
        /* Then guarantee A's left-leaning property: if necessary, swap A's
           left and right sub-heaps so the lower NPL ends up on the right. */
        if (A->left == NULL || A->left->npl < A->right->npl)
            SwapChildren(A);
        /* Finally, update A's npl (recall npl(NULL) = -1). */
        A->npl = 1 + (A->right == NULL ? -1 : A->right->npl);
        return A; /* the top of the merged heap */
    }
Specific examples are as follows:
Eventually:
It is important to note that after merging, the original two heaps should not be used again: the merge modifies them in place, so changes within them would affect the merged result. The time to perform the merge is proportional to the sum of the lengths of the two right paths, because each node visited during the recursion does constant work. Therefore the time bound for merging is $O\left(\log n\right)$. The merge can also be done non-recursively in two passes. The first pass builds a new tree by merging the right paths of the two heaps: the nodes on the combined right path are arranged in ascending order (or descending, depending on convention), with every left child kept intact. In this example the new right path is 3, 6, 7, 8, and so on. The second pass restores the leftist property: at each node on that path where the property has been violated, the node's two children are swapped.
For insertion, treat the element to be inserted as a single-node heap and perform one merge.
    void Insert(int x, LeftHeap *H)
    {
        LeftHeap fresh = malloc(sizeof(struct TreeNode));
        if (fresh == NULL)
            return; /* out of memory */
        fresh->value = x;
        fresh->npl = 0;
        fresh->left = fresh->right = NULL;
        /* Merge the one-node heap into H; the double pointer lets the
           caller's root be updated (a plain `H = Merge(...)` on a value
           parameter would be lost on return). */
        *H = Merge(fresh, *H);
    }
As for deletion: remove the root, leaving two heaps, then merge them, so the time is still $O\left(\log n\right)$.
    int DeleteMin(LeftHeap *H)
    {
        /* Precondition: the heap is not empty. */
        LeftHeap L = (*H)->left;
        LeftHeap R = (*H)->right;
        int t = (*H)->value;
        /* The three lines above are preparation: back up the relevant data. */
        free(*H); /* this line physically removes the root node */
        /* After that, simply re-merge the now-orphaned left and right sub-heaps. */
        *H = Merge(L, R);
        return t;
    }
As you can see, whether for the leftist heap's deletion or the insertion just implemented, the real computation is nothing but the Merge interface. As shown earlier, a merge completes efficiently in $O\left(\log n\right)$ time, so the deletion and insertion built on it achieve the same computational efficiency. The same efficiency with a far more concise implementation: what reason could we have not to do it this way?
In fact, when it comes to merging, Crane, the inventor of the leftist heap, is a master. Beyond the leftist heap, he also gave efficient merging algorithms for many other data structures. For example, for the AVL tree we are already familiar with, Crane gave an efficient merging algorithm as well; interested readers can look up the relevant articles.
The next article discusses binomial queues. Unlike before, a binomial queue is not a single tree but a forest.
P.S. This is the time to prepare for the TOEFL, so the next article will probably be sent around November.