Further optimization of A* beyond the binary heap: faster and more powerful (AS3)


Optimizing with a binary heap already improves the efficiency of A*, but there is no reason to stop there if you want it faster and more powerful. Over the past two days I have tidied up my A* implementation and collected some source code from a few experts I found online. Today I want to share my thoughts with you.

There are already plenty of articles on the A* principle and binary-heap optimization (for example at tiandihui). I will not repeat them here; read such an article first so you have some prior knowledge of A*. (The ideas in this article come from other people's code, not my own original work, haha.)

 

Most of these optimizations trade space for time: computation is moved out of the pathfinding loop and into initialization, at the cost of extra memory. Now to the topic.

1. Pre-compute neighboring obstacle checks and costs

Checking whether the surrounding cells are obstacles and computing their cost values on every search is a waste of time, because between searches the obstacles around each cell do not change and the cost values do not change either. We can pull this computation out of the search and do it when the map is initialized. The procedure is as follows:

1. Add an adjacent-node array nodeLinks and a cost array costLinks to the node. (You can also create a LinkNode object with two attributes, node and cost.) The two arrays are kept in one-to-one correspondence, so the cost of nodeLinks[i] is costLinks[i].

2. When the map is initialized, traverse every cell, find all the non-obstacle cells around it, and store them in the node's adjacency array. Calculate the cost to each adjacent node and store it at the corresponding index of the node's cost array.

3. When finding a path, the walkable neighbors of the node currently being expanded are read directly from its nodeLinks attribute. No calculation is required.

Pre-computation saves a lot of per-search time; the disadvantage is that map initialization becomes slower. A sketch of the idea follows below.
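As an illustration only, here is a minimal TypeScript sketch of the pre-computation step (the article's own code is AS3). The nodeLinks/costLinks names follow the article; the Node class, grid layout, and the 10/14 straight/diagonal costs are assumptions made for the example.

```typescript
// Hypothetical node: real projects would also carry F, G, H, parent, etc.
class Node {
  nodeLinks: Node[] = [];   // walkable neighbors, filled once at map init
  costLinks: number[] = []; // movement cost to nodeLinks[i]
  constructor(public x: number, public y: number, public walkable = true) {}
}

const STRAIGHT = 10; // assumed costs, not from the article
const DIAGONAL = 14;

// Called once after the map is built; searches never re-check obstacles.
function buildLinks(grid: Node[][]): void {
  const h = grid.length, w = grid[0].length;
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      const node = grid[y][x];
      if (!node.walkable) continue;
      for (let dy = -1; dy <= 1; dy++) {
        for (let dx = -1; dx <= 1; dx++) {
          if (dx === 0 && dy === 0) continue;
          const nx = x + dx, ny = y + dy;
          if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
          const neighbor = grid[ny][nx];
          if (!neighbor.walkable) continue;
          node.nodeLinks.push(neighbor);
          node.costLinks.push(dx !== 0 && dy !== 0 ? DIAGONAL : STRAIGHT);
        }
      }
    }
  }
}
```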

 

2. Array optimization

The array indexOf and shift methods perform very poorly. Many people's A* calls these two methods inside the while loop, and that is where several hundred milliseconds go. If you are skeptical, write a loop of ten thousand iterations that calls these two methods. To determine whether a node is already in the open list, the common practice is to use indexOf. In fact, this check can be replaced as follows:

1. Add an isOpen: Boolean attribute to the node.

2. Whenever a node is pushed onto the open array, set its isOpen to true.

3. When a node is removed from the open list, set its isOpen attribute to false.

4. To determine whether a node is in the open list, just check isOpen.

The same trick applies to the closed list, and it also removes the need for indexOf inside the binary heap; the closed-list version is omitted here. A sketch follows below.
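A minimal sketch of the flag idea, assuming isOpen/isClose attributes as described above; the helper function names are hypothetical, and a real implementation would re-heapify the open list rather than use a plain array.

```typescript
interface FlaggedNode { isOpen: boolean; isClose: boolean; }

function addToOpen<T extends FlaggedNode>(openList: T[], node: T): void {
  node.isOpen = true;    // mark the node instead of searching for it later
  openList.push(node);   // a real binary heap would also bubble the node up here
}

function removeFromOpen<T extends FlaggedNode>(node: T): void {
  node.isOpen = false;   // no indexOf needed anywhere
  node.isClose = true;   // moving to the closed set is just another flag
}

// Membership test becomes O(1):
//   if (node.isOpen) { ... }              instead of
//   if (openList.indexOf(node) !== -1) { ... }
```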

When building the return path by following parent references from the closed node, many people use unshift to add each node to the front of the path array one by one. This is another source of slowness, because unshift re-indexes the whole array on every call. Change it to push, which appends without touching the existing indices, so it is very fast. After all nodes have been pushed onto the path array, call reverse once to turn the array around. The result is the same as with unshift, but the performance difference can be large.
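A minimal sketch of the push + reverse path reconstruction, assuming each node stores a parent reference set during the search (the PathNode interface is only for the example).

```typescript
interface PathNode { parent: PathNode | null; }

function buildPath(endNode: PathNode): PathNode[] {
  const path: PathNode[] = [];
  let node: PathNode | null = endNode;
  while (node !== null) {
    path.push(node);        // push never re-indexes existing elements
    node = node.parent;
  }
  return path.reverse();    // one reversal instead of one unshift per node
}
```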

 

3. Open/closed marks that never need resetting

Replacing indexOf with isOpen is a big step forward in performance. However, after each path search a problem appears: the nodes have to be reset, that is, isOpen and isClose must be set back to false on every node that entered the open or closed list. The cost of this reset is admittedly quite small, but in the spirit of chasing every last bit of speed I found a more extreme approach. The procedure is as follows:

1. Give the A* object an integer property markIndex that acts as a unique, auto-incrementing mark for the current search.

2. Change the node's isOpen and isClose Boolean attributes into openMark and closeMark integer attributes.

3. When a node is added to the open list, set its openMark to the markIndex of the current search; when it is removed, set openMark to -1 (or any number that is not equal to markIndex).

4. To determine whether a node is in the open list, just check if (openMark == markIndex).

5. After each search, increment markIndex by 1.

Because the auto-incrementing mark is different for every search, the nodes' openMark values never need to be reset and can simply be reused next time. The performance improvement from this trick is very small and can be ignored unless you are chasing those last few dozen milliseconds. A sketch follows below.
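A minimal sketch of the search-mark trick; openMark, closeMark, and markIndex follow the article, while the surrounding class and method names are a hypothetical frame.

```typescript
class MarkedNode {
  openMark = -1;
  closeMark = -1;
}

class AStar {
  private markIndex = 0;

  findPath(start: MarkedNode /*, end: MarkedNode, ... */): void {
    this.markIndex++;                 // fresh mark: no node ever needs resetting
    this.addToOpen(start);
    // ... main search loop omitted ...
  }

  private addToOpen(node: MarkedNode): void {
    node.openMark = this.markIndex;   // "in the open list" for this search only
  }

  private removeFromOpen(node: MarkedNode): void {
    node.openMark = -1;               // any value !== markIndex would do
    node.closeMark = this.markIndex;  // "in the closed list" for this search
  }

  private isOpen(node: MarkedNode): boolean {
    return node.openMark === this.markIndex;
  }
}
```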

 

4. Reduce get/set and function calls inside the while loop

Getter/setter performance is actually quite good, but in a real project a large grid pushes the while loop into tens of thousands of iterations; each call's overhead is tiny, yet the cumulative effect on efficiency is obvious. If you do not believe it, write a large loop and compare a getter against a plain public attribute; the performance difference is three to four times. If the node's F, G, H, x, and y attributes all go through get/set accessors, the difference in milliseconds is obvious. Functions are an indispensable part of code design, so this optimization only applies inside A*'s while loop and can be skipped in normal projects.
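If you want to measure this yourself, here is a rough TypeScript sketch of the comparison. The three-to-four-times figure above is the article's AS3 observation; a modern JS/TS runtime may show a much smaller gap, and the loop count and timing method here are only illustrative.

```typescript
class WithGetter {
  private _f = 0;
  get f(): number { return this._f; }
  set f(v: number) { this._f = v; }
}

class WithField {
  f = 0; // plain public attribute, no accessor call
}

const N = 1_000_000;
const a = new WithGetter();
const b = new WithField();

let t = Date.now();
for (let i = 0; i < N; i++) a.f = a.f + 1;
console.log("getter/setter:", Date.now() - t, "ms");

t = Date.now();
for (let i = 0; i < N; i++) b.f = b.f + 1;
console.log("public field :", Date.now() - t, "ms");
```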

 

5. Bitwise operations for minor performance gains

In the binary heap, division by 2 can be written in three ways: num / 2, num * 0.5, or num >> 1. Of the three, the fastest is clearly the third. Readability suffers a little, but the efficiency is slightly higher. Likewise, num * 2 can be written as num << 1 for a small gain. (The shift forms are only equivalent for non-negative integers, which heap indices always are.)
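A minimal sketch of where the shifts appear in a binary heap's index math; 0-based indices are assumed here, since the article does not say whether its heap is 0- or 1-based.

```typescript
function parentIndex(i: number): number {
  return (i - 1) >> 1;   // same as Math.floor((i - 1) / 2) for i >= 1
}

function leftChildIndex(i: number): number {
  return (i << 1) + 1;   // same as i * 2 + 1
}

function rightChildIndex(i: number): number {
  return (i << 1) + 2;   // same as i * 2 + 2
}
```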

 

These are the speed-up methods I can use at the moment. I am recording them here first and will add more as I find them. The source code has not been tidied up yet, so that is all for now.
