In an ordinary queue, the order in which elements are dequeued is determined by the time they were enqueued: first in, first out. But sometimes we want a different kind of queue, one where arrival order does not matter and only priority does: the higher an element's priority, the sooner it leaves the queue. Such a data structure is called a priority queue, and it is often used in specialized applications, such as the scheduler an operating system uses to control processes.
So how is a priority queue implemented? We can quickly come up with three candidate solutions.
1. Use a linked list. Insertion simply places the new element at the head of the list, which takes O(1); dequeuing traverses the whole list to find the highest-priority element, returns it, and deletes its node, which takes O(N).
2. Use a linked list whose elements are kept sorted by priority. Insertion must find the correct position for the new node, which takes O(N); dequeuing simply returns and deletes the head of the list, which takes O(1). (A sketch of this approach follows the list.)
3. Use a binary search tree. Insertion takes O(log N); dequeuing returns and deletes the largest (or smallest, depending on how priority is defined) node in the tree, which also takes O(log N). However, repeated dequeues tend to make the tree unbalanced.
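To make method 2 concrete, here is a minimal sketch of a priority queue built on a sorted linked list. The node type and function names are hypothetical and are not part of the binary heap implementation developed below; it assumes, as we will later, that a smaller value means a higher priority.

#include <stdlib.h>

// hypothetical node type for method 2: the list is kept sorted so that
// the head always holds the highest-priority (smallest) value
struct ListNode {
    unsigned int value;
    struct ListNode *next;
};

// O(N) insertion: walk the list to find the first node with a larger value
struct ListNode *SortedInsert(struct ListNode *head, unsigned int x) {
    struct ListNode *node = (struct ListNode *)malloc(sizeof(struct ListNode));
    node->value = x;
    if (head == NULL || x < head->value) {  // new element becomes the head
        node->next = head;
        return node;
    }
    struct ListNode *cur = head;
    while (cur->next != NULL && cur->next->value <= x)
        cur = cur->next;
    node->next = cur->next;
    cur->next = node;
    return head;
}

// O(1) dequeue: remove and return the head (caller must ensure the list is non-empty)
struct ListNode *DequeueHead(struct ListNode *head, unsigned int *out) {
    *out = head->value;
    struct ListNode *rest = head->next;
    free(head);
    return rest;
}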
If you decide to use a linked list, you must choose between method 1 and method 2 based on the relative frequency of insert and dequeue operations.
If you decide to use a binary search tree, it is actually a bit of overkill, because it supports far more operations than the two we need (insertion, and deletion of the largest or smallest node). Moreover, a binary tree with N nodes has 2N pointer fields, but only N-1 of them are actually used (every node except the root is pointed to by exactly one pointer), so N+1 pointer fields are always NULL, which is wasted space (for example, a tree with 7 nodes has 14 pointer fields, of which only 6 are used and 8 are NULL). On the other hand, its time complexity is more evenly balanced between the two operations.
Today, however, we will use a new data structure to implement the priority queue. It also achieves O(log N) insertion and dequeuing, and it needs no pointers at all. This data structure is the binary heap.
Before we discuss the binary heap, let us settle on a priority convention: we assume that an element's priority is a positive integer, and that the smaller the value, the higher the priority (this will be convenient when we implement the binary heap later).
Logically, a binary heap is a complete binary tree, and a complete binary tree is a binary tree that satisfies the following conditions:
1. If the lowest (i.e. deepest) level of nodes is removed, what remains is a full binary tree.
2. The nodes on the lowest level must fill in from left to right, with no gaps.
(Figure: an example of a complete binary tree.)
The most important property of a complete binary tree is that it can be stored in a plain array (no pointers or cursor arrays required). The principle is simple: store the root at index 1; then for any node at index i, its parent is at index i/2 (integer division, so if i is odd the remainder is simply dropped, which is trivial in code), its left child is at index i*2, and its right child is at index i*2+1.
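A quick sketch of this index arithmetic (the macro names here are my own, used only for illustration):

// index arithmetic for a complete binary tree stored in an array with the root at index 1
#define PARENT(i)      ((i) / 2)
#define LEFT_CHILD(i)  ((i) * 2)
#define RIGHT_CHILD(i) ((i) * 2 + 1)

// for example, the node stored at index 5 has:
//   PARENT(5)      == 2
//   LEFT_CHILD(5)  == 10
//   RIGHT_CHILD(5) == 11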
So far we have established two things: first, a binary heap is a complete binary tree; second, a complete binary tree can be stored in an array. Therefore a binary heap can be stored in an array.
We have now achieved the "no pointers" goal; the next question is how to satisfy the requirements of a priority queue while keeping both operations at O(log N). Before that, let us define the node structure and give the storage structure and initialization code for the binary heap:
#include <stdlib.h>
#include <stdbool.h>
#include <limits.h>

// binary heap structure definition
struct BinaryHeap {
    unsigned int capacity;   // capacity is the maximum capacity of the binary heap
    unsigned int size;       // size is the current size of the binary heap, i.e. the number of elements
    unsigned int *heap;      // heap is the "array", allocated according to the capacity given at initialization
};
typedef struct BinaryHeap *PriorityQueue;   // PriorityQueue is the priority queue

PriorityQueue Initialize(unsigned int capacity) {
    PriorityQueue pPQueue = (PriorityQueue)malloc(sizeof(struct BinaryHeap));
    pPQueue->heap = (unsigned int *)malloc(sizeof(unsigned int) * capacity);
    pPQueue->capacity = capacity;
    pPQueue->size = 0;
    pPQueue->heap[0] = 0;   // making heap[0] 0 lets a newly inserted element percolate up without a special check at the root; see Insert below
    return pPQueue;
}
So how does a binary heap meet the requirements of a priority queue? The answer lies in the requirements the binary heap places on its own nodes, of which there are exactly two:
1. The root node has the highest priority.
2. Every node has a higher priority than either of its children.
(Figure: of the two complete binary trees shown, only the one on the left satisfies the binary heap requirements; in the tree on the right, node 6 violates them.)
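To make the heap-order requirement concrete, here is a small helper of my own (not part of the code below) that checks whether an array stored in the scheme described above satisfies it:

#include <stdbool.h>

// returns true if heap[1..size] satisfies the min-heap property:
// every node's value is no larger than (i.e. its priority no lower than) its children's
bool IsMinHeap(const unsigned int *heap, unsigned int size) {
    for (unsigned int i = 2; i <= size; i++) {
        if (heap[i / 2] > heap[i])   // parent has lower priority than child: property violated
            return false;
    }
    return true;
}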
Next, with these two requirements in mind, let us see how insertion into a binary heap is implemented. Suppose we already have the following binary heap and want to insert a new node 14.
(Figure: the array storage of the heap above.)
First, we must make sure that after the new node is inserted, the binary heap is still a complete binary tree. Guaranteeing this is simple: insert the new node just to the right of the rightmost element on the last level of the complete binary tree, which in the array is simply the position immediately after the current last element.
Then we have to move the new node to where it belongs. This is also simple: compare the new node with its parent, and if the new node has the higher priority, swap it with its parent; repeat until the parent has a higher priority than the new node. This strategy is called percolating up. (In the figure, the hollow node is the new node.)
(Figure: the array at each step of the insertion.)
Once the idea behind insertion is clear, the code is not hard to write:
bool Insert(PriorityQueue pPQueue, unsigned int x) {
    // heap[0] of the binary heap is reserved, so size can be at most capacity - 1
    if (pPQueue->size == pPQueue->capacity - 1 || x == 0 || x > INT_MAX)
        return false;
    // curPos is the current position, initialized to the new size of the heap, i.e. the last position
    unsigned int curPos = ++pPQueue->size;
    // keep comparing x with the parent of curPos; if the parent is greater than x, move the parent down,
    // which is equivalent to percolating x up; exit the loop when the parent is less than x,
    // at which point curPos is where x belongs
    for (; pPQueue->heap[curPos / 2] > x; curPos /= 2) {
        pPQueue->heap[curPos] = pPQueue->heap[curPos / 2];
    }
    pPQueue->heap[curPos] = x;
    return true;
}
Notice that when curPos reaches 1, i.e. the root, heap[0] gets compared with x. To prevent x from percolating up into heap[0], we required earlier that x be a positive integer and set heap[0] to 0 at initialization, so heap[0] is guaranteed to be smaller than any inserted element.
A little analysis shows that the worst case for insertion is when the new node percolates all the way up to the root; the path it travels is just the path from a newly inserted leaf to the root of the binary tree, so the time complexity is O(log N).
Now let us look at how the binary heap implements dequeuing. Finding the highest-priority node is trivial: it is the root. But after the root is removed, its place becomes a "hole", and how should this hole be handled? A naive idea is to repeatedly swap the hole with whichever of its children has the higher priority, until the hole sinks to the lowest level. But this approach easily goes wrong: the hole can end up in a position that destroys the complete binary tree property.
So how can the binary heap preserve the complete binary tree property? The solution is a small improvement on the idea above: after the root is deleted, move the last node of the binary heap into the root's position, then percolate it down until its priority is higher than all of its children's. This way the complete binary tree property is preserved: even if this "new root" percolates all the way to the bottom, there is no hole left to break the structure. (In the figure, the hollow node is the element that was originally last in the array.)
(The array changes during the dequeue operation are omitted.)
Once the idea behind dequeuing is clear, the code is not hard to write:
unsigned int Dequeue(PriorityQueue pPQueue) {
    // return 0 if the heap is empty; 0 can never be an element in the heap
    if (pPQueue->size == 0)
        return 0;
    unsigned int root = pPQueue->heap[1];                        // root holds the original heap root, the value to be returned
    unsigned int lastElement = pPQueue->heap[pPQueue->size--];   // lastElement is the last element of the heap
    // lastElement is percolated down from the root, so curPos is initialized to 1;
    // child points to whichever of curPos's two children has the higher priority
    unsigned int curPos = 1;
    unsigned int child = curPos * 2;
    while (child <= pPQueue->size) {
        // if child is not the last element and its sibling (curPos's right child) has a higher priority,
        // point child at curPos's right child instead
        if (child != pPQueue->size && pPQueue->heap[child] > pPQueue->heap[child + 1])
            child += 1;
        // compare lastElement with curPos's highest-priority child: if lastElement has the higher priority,
        // the loop ends; otherwise move that child up, which is equivalent to percolating lastElement down
        if (pPQueue->heap[child] < lastElement) {
            pPQueue->heap[curPos] = pPQueue->heap[child];
            curPos = child;
            child = curPos * 2;
        } else {
            break;
        }
    }
    // after the loop, curPos is the final position of lastElement
    pPQueue->heap[curPos] = lastElement;
    return root;
}
The time complexity of dequeuing is the same as that of enqueuing (insertion): O(log N).
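Here is a minimal sketch of how the functions above might be used together; the element values are arbitrary and chosen only for illustration:

#include <stdio.h>

int main(void) {
    PriorityQueue pq = Initialize(16);
    Insert(pq, 24);
    Insert(pq, 14);
    Insert(pq, 32);
    Insert(pq, 21);
    // smaller values have higher priority, so this prints: 14 21 24 32
    while (pq->size > 0)
        printf("%u ", Dequeue(pq));
    free(pq->heap);   // no Destroy function is given in this post, so free the pieces by hand
    free(pq);
    return 0;
}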
With the code above, the binary heap is basically complete (the destroy code is not shown, but it is not hard to implement). So what else is a binary heap, or a priority queue, good for? (A priority queue is also simply called a heap; the binary heap is just one implementation, and other implementations are likewise called heaps or priority queues.)
Imagine inserting a set of unordered data into a binary heap, then repeatedly dequeuing and appending each dequeued element (the root of the binary heap) to an ordinary queue. Would we not end up with a sorted queue? In other words, a binary heap can be used for sorting! How much time does sorting with a binary heap take? Roughly the total insertion time plus the total dequeue time, i.e. O(N log N + N log N) = O(N log N). This is much better than the bubble sort and selection sort that most of us already know. We will complete the implementation of heap sort in a later post.
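As a preview of that post, here is a minimal heap-sort sketch built on the priority queue above; it is only an illustration of the idea described in the previous paragraph, not the final implementation.

// sorts a[0..n-1] in ascending order by inserting every element into a binary heap
// and then dequeuing them back out in priority (smallest-first) order;
// assumes all values are positive, as Insert requires
void HeapSortWithPQ(unsigned int *a, unsigned int n) {
    PriorityQueue pq = Initialize(n + 1);   // +1 because heap[0] is reserved for the sentinel
    for (unsigned int i = 0; i < n; i++)
        Insert(pq, a[i]);
    for (unsigned int i = 0; i < n; i++)
        a[i] = Dequeue(pq);
    free(pq->heap);
    free(pq);
}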
The following address contains a simple implementation and test of the binary heap, and also demonstrates using the binary heap to sort:
Https://github.com/nchuXieWei/ForBlog-----Binaryheap