Optimization and Improvement of the A* Pathfinding Algorithm

Source: Internet
Author: User

Building on the earlier study of the A* search algorithm, we should now have a working understanding of A* pathfinding. In practical applications, however, the algorithm needs some improvement and optimization.

Iterative Deepening Depth-First Search (IDDFS)

A troublesome case for depth-first search is a branch of the tree that contains no result yet is very deep, possibly even bottomless, so that the search never finds anything at all. The idea of iterative deepening arose to prevent exactly this situation.


Iterative deepening search (IDS), also called iterative deepening depth-first search (IDDFS), is a common search mechanism, often used with depth-first search. By gradually increasing the limit of a depth-limited search (DLS), starting from 1, then 2, and so on until the target node is found, iterative deepening search can find the best depth limit. Depth-limited search means introducing a depth bound, limit, into depth-first search: if the depth from the root to node n equals limit, then n is treated as a leaf node with no children.
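The two ideas fit together in a few lines of code. Here is a minimal sketch; the example tree, node names, and the `max_depth` cap are illustrative assumptions, not from the article.

```python
# Depth-limited search: DFS that treats nodes at depth `limit` as leaves.
def depth_limited_search(tree, node, goal, limit):
    if node == goal:
        return True
    if limit == 0:
        return False  # depth bound reached: treat this node as a leaf
    return any(depth_limited_search(tree, child, goal, limit - 1)
               for child in tree.get(node, []))

# Iterative deepening: run DLS with limit = 1, 2, ... until the goal appears.
def iddfs(tree, root, goal, max_depth=10):
    for limit in range(1, max_depth + 1):
        if depth_limited_search(tree, root, goal, limit):
            return limit  # the depth at which the goal was found
    return None

# Illustrative tree as an adjacency dict: each node maps to its children.
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F']}
print(iddfs(tree, 'A', 'E'))  # E is two levels below the root, so this prints 2
```

Note that each round restarts the DFS from the root; the upper levels are re-expanded, which is exactly the cost analyzed below.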

Consider the following example tree.


Suppose we need to find node E.

First set the depth limit to 1; the tree to search is:


Depth-first search finds nothing, so increase the depth limit to 2.


DFS again finds nothing, so continue increasing the depth limit to 3.


Run DFS once more, and bingo, the node is found.

Since each pass is just DFS, does this look silly? No!

First, this method avoids DFS's problem of plunging too deep. Compared with BFS, which generally has to store all generated nodes and therefore uses far more space than depth-first search, ID-DFS has a much smaller memory footprint. Now look at the time complexity.


Iterative deepening may look like a waste of time, because states can be generated more than once. In fact it is not, because most of the nodes are at the bottom of the tree, and the bottom-level nodes are regenerated only rarely. For a tree with depth d and branching factor b, the maximum number of nodes generated is:

d*b + (d-1)*b^2 + ... + 2*b^(d-1) + 1*b^d

So the time complexity of iterative deepening search is the same as that of depth-first search, O(b^d).
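These counts are easy to check numerically. Below is a quick computation with the illustrative values b = 10 and d = 5 (a common textbook example, not from the article):

```python
# Quick numeric check of the node counts above.
b, d = 10, 5  # branching factor and depth (illustrative values)

# Iterative deepening regenerates level i on (d - i + 1) of its passes.
ids_nodes = sum((d - i + 1) * b**i for i in range(1, d + 1))

# A single depth-d pass generates each level exactly once.
single_pass = sum(b**i for i in range(1, d + 1))

print(ids_nodes, single_pass)  # 123450 vs 111110: only about 11% extra work
```

The repeated passes cost roughly 11% more node generations than a single full-depth search, which is why the asymptotic complexity is unchanged.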

So this algorithm is a compromise between the two search methods: whatever BFS can find, it can find too, without using nearly as much space, at the cost of a little extra time (the same node may be visited many times).


Here is a general comparison of the three algorithms.




Iterative Deepening A*

Iterative deepening A*, abbreviated IDA*, applies the idea of iterative deepening, except that the bound is no longer the depth of the tree but the value of f(n).

Start from S with the bound set to f(S), then run a depth-first search from S, ignoring every node whose f() value exceeds the threshold. If the goal is not found, raise the threshold and run DFS again, repeating until the goal node is finally found.
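This loop can be sketched in a few lines. The weighted graph, edge costs, and heuristic values below are illustrative assumptions, not taken from the article; the new threshold for each round is the smallest f() value that exceeded the previous one.

```python
import math

def ida_star(graph, h, start, goal):
    """graph: node -> list of (neighbor, edge_cost); h: heuristic estimate per node."""
    threshold = h[start]
    while True:
        found, value = search(graph, h, start, goal, 0, threshold)
        if found:
            return value          # g-cost of the path to the goal
        if value == math.inf:
            return None           # goal unreachable
        threshold = value         # smallest f() that exceeded the old bound

def search(graph, h, node, goal, g, threshold):
    f = g + h[node]
    if f > threshold:
        return False, f           # prune; report the f() that broke the bound
    if node == goal:
        return True, g
    minimum = math.inf
    for child, cost in graph.get(node, []):
        found, value = search(graph, h, child, goal, g + cost, threshold)
        if found:
            return True, value
        minimum = min(minimum, value)
    return False, minimum

graph = {'S': [('A', 1), ('B', 4)], 'A': [('G', 5)], 'B': [('G', 1)]}
h = {'S': 4, 'A': 2, 'B': 1, 'G': 0}
print(ida_star(graph, h, 'S', 'G'))   # cheapest path S -> B -> G costs 5
```

Notice there is no open or closed list at all: the recursion stack is the only bookkeeping, which is precisely the memory saving described next.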


Obviously, IDA* replaces the open list and closed list with a depth-first search, reducing both the memory overhead and the cost of list maintenance. Although every search starts from scratch, and searching the same nodes over and over may seem unreasonable, the cost is much lower than maintaining the open and closed lists of the original version.


Fringe Search

A*'s biggest performance problem is maintaining the open and closed lists; IDA*'s biggest problem is that it cannot remember its history and therefore searches the same nodes repeatedly. A newer pathfinding algorithm, fringe search, is a compromise between A* and IDA*.

It maintains two lists, now and later, to record the fringe (edge) of the search, and uses the IDA* idea to move forward. The pseudo-code makes this concrete:

    now       - linked list of search nodes; list order determines order of evaluation
    later     - linked list of search nodes
    root      - start node

    threshold = root's f()
    push root onto now
    while now is not empty
        for each node in now
            if node == goal
                stop
            if node's f() > threshold
                push node onto end of later
            else
                insert children of node into now, behind node
            remove node from now and discard
        push later onto now, clear later
        set threshold = minimum f() found that is higher than the current threshold

The now list holds the nodes to be evaluated in the current pass, and the later list stores the nodes that will be evaluated in the next pass.

This process maintains the lists in a weaker, unsorted order, yet effectively expands nodes in a depth-first manner like IDA*. If the goal is not found after a complete traversal of now, the threshold is raised, the later list becomes the now list, and the search restarts from the head of the now list. Although the search has to maintain the now and later lists, there is no sorting overhead, and memory consumption is much less than A*.
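The pseudo-code translates almost line for line into a runnable sketch. The graph and heuristic below are illustrative assumptions, and plain Python lists stand in for the linked lists (pop(0) is O(n), which a real implementation would avoid with a deque):

```python
import math

def fringe_search(graph, h, start, goal):
    """graph: node -> list of (neighbor, edge_cost); h: heuristic estimate per node."""
    now = [(start, 0)]            # (node, g-cost) pairs; list order = evaluation order
    later = []
    threshold = h[start]
    while now:
        next_threshold = math.inf
        while now:
            node, g = now.pop(0)
            f = g + h[node]
            if f > threshold:
                next_threshold = min(next_threshold, f)
                later.append((node, g))    # re-examine in the next pass
                continue
            if node == goal:
                return g
            # insert children behind the current node: the pass stays depth-first
            now = [(c, g + cost) for c, cost in graph.get(node, [])] + now
        now, later = later, []             # later becomes the new now
        threshold = next_threshold         # raise the bound and sweep again
    return None

graph = {'S': [('A', 1), ('B', 4)], 'A': [('G', 5)], 'B': [('G', 1)]}
h = {'S': 4, 'A': 2, 'B': 1, 'G': 0}
print(fringe_search(graph, h, 'S', 'G'))   # S -> B -> G, total cost 5
```

Unlike IDA*, a raised threshold does not restart from the root: the rejected fringe nodes carried over in later are re-examined directly, which is where the speedup over IDA* comes from.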


