Traveling Salesman Problem (TSP) Solving Report


The Traveling Salesman Problem is a well-known problem. I recently wrote a code implementation for it as part of the algorithms class I took on Coursera; here is a summary.

Test input:

25
20833.3333 17100.0000
20900.0000 17066.6667
21300.0000 13016.6667
21600.0000 14150.0000
21600.0000 14966.6667
21600.0000 16500.0000
22183.3333 13133.3333
22583.3333 14300.0000
22683.3333 12716.6667
23616.6667 15866.6667
23700.0000 15933.3333
23883.3333 14533.3333
24166.6667 13250.0000
25149.1667 12365.8333
26133.3333 14500.0000
26150.0000 10550.0000
26283.3333 12766.6667
26433.3333 13433.3333
26550.0000 13850.0000
26733.3333 11683.3333
27026.1111 13051.9444
27096.1111 13415.8333
27153.6111 13203.3333
27166.6667 9833.3333
27233.3333 10450.0000

Note: these are the two-dimensional coordinates of 25 cities, and the distance between cities is the Euclidean distance. The task is to find the shortest route that starts from one city, visits every city exactly once, and returns to the starting city.
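To make the objective concrete, here is a minimal Python sketch of the quantity being minimized, the length of a closed tour over these coordinates; the helper name tour_length is my own choice, not part of the original assignment.

import math

def tour_length(coords, order):
    # coords: list of (x, y) pairs; order: a permutation of range(len(coords)).
    # Returns the length of the closed tour that visits the cities in this
    # order and returns to the starting city (Euclidean distances).
    total = 0.0
    for i in range(len(order)):
        x1, y1 = coords[order[i]]
        x2, y2 = coords[order[(i + 1) % len(order)]]  # wrap around to close the tour
        total += math.hypot(x1 - x2, y1 - y2)
    return total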

Background: http://en.wikipedia.org/wiki/travelling_salesman_problem. The Wikipedia article summarizes the history of TSP as well as its exact and approximate solution methods.

According to the problem requirements, I need an exact solution, and Wikipedia is not very specific on that. The exact algorithm everyone probably knows is dynamic programming over subsets (Held-Karp), with time complexity O(N^2 * 2^N). Both its time and space are exponential, so even instances with a little more than 20 nodes become very hard to handle. I was not very clear about its implementation, so I did not use it. Instead I used branch and bound, which is really just search plus pruning: if the lower bound of a partial solution already exceeds the cost of the best complete tour found so far, the branches growing out of it do not have to be explored at all. For a problem whose solution space has size n!, whether the pruning is effective (provided, of course, that it is correct) has a huge impact on the algorithm's time and space consumption.
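For reference only (again, I did not use it), a minimal sketch of that dynamic program might look like the following; the name held_karp and the bitmask state encoding are my own illustrative choices, and for 25 cities the table is already far too large to hold in memory, which is exactly the problem mentioned above.

from itertools import combinations

def held_karp(dis):
    # dis: n x n symmetric distance matrix.
    # dp[(S, j)] = shortest path that starts at city 0, visits exactly the
    # cities in bitmask S (which contains 0 and j), and ends at city j.
    n = len(dis)
    dp = {(1 | (1 << j), j): dis[0][j] for j in range(1, n)}
    for size in range(3, n + 1):
        for subset in combinations(range(1, n), size - 1):
            S = 1
            for c in subset:
                S |= 1 << c
            for j in subset:
                prev = S ^ (1 << j)  # state before city j was appended
                dp[(S, j)] = min(dp[(prev, k)] + dis[k][j]
                                 for k in subset if k != j)
    full = (1 << n) - 1
    return min(dp[(full, j)] + dis[j][0] for j in range(1, n))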

Two materials I personally found useful are:

1. http://www.academic.marist.edu/~jzbv/algorithms/branch%20and%20bound.htm

2. http://lcm.csa.iisc.ernet.in/dsa/node187.html

In brief, a partial solution fixes the path through some of the cities while leaving the rest unexplored. The lower bound of a partial solution is estimated as the length of the already-determined path plus a lower estimate of the cheapest possible way to cover the remaining cities; in other words, it is a lower bound on the length of every complete solution that can be extended from this partial solution.
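In code, that pruning rule looks roughly like the following generic sketch; lower_bound, expand, is_complete and cost are hypothetical callbacks standing in for the problem-specific parts (for TSP, a partial solution would be a partial path, expand would append one unvisited city, and cost would close the tour).

import heapq
from itertools import count

def branch_and_bound(root, lower_bound, expand, is_complete, cost):
    # Best-first branch and bound: discard any partial solution whose lower
    # bound is already no better than the best complete solution found so far.
    best = float('inf')
    tie = count()  # tie-breaker so the heap never has to compare states directly
    heap = [(lower_bound(root), next(tie), root)]
    while heap:
        bnd, _, partial = heapq.heappop(heap)
        if bnd >= best:
            continue  # prune this branch and everything below it
        if is_complete(partial):
            best = min(best, cost(partial))
            continue
        for child in expand(partial):
            b = lower_bound(child)
            if b < best:
                heapq.heappush(heap, (b, next(tie), child))
    return best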

The method in link 1 is very simple. For a partial solution, the lower bound is estimated as the sum of the length of the currently determined path and, for each city whose path is not yet fixed, the length of its minimum adjacent edge. This method is easy to implement and gives you some quick confidence, but that simplicity is also its downside: on this 25-city instance I ran it for 24 hours without getting a result... The link provides a Java implementation; my C++ implementation is below. Since it never produced a result, I cannot guarantee the correctness of my program.

// Branch and bound with the simple lower bound from link 1:
// determined path length + minimum adjacent edge of every remaining city.
#include <cstdio>
#include <cstring>
#include <cmath>
#include <cassert>
#include <climits>
#include <vector>
#include <queue>
#include <ctime>
using namespace std;

const int N = 25;
int n;
double cities[N][2];
double dis[N][N];
double minedge[N];

struct TSPNode {
    vector<int> path;
    double plen;  // length of the determined partial path
    double bnd;   // cached lower bound (-1 means not yet computed)

    TSPNode() {
        plen = 0;
        bnd = -1;
    }
    double pathlen() {
        return plen;
    }
    int size() {
        return path.size();
    }
    int lastcity() {
        if (size() < 1) {
            fprintf(stdout, "error: path is empty.\n");
            return -1;
        }
        return path[path.size() - 1];
    }
    void add(int city) {
        path.push_back(city);
    }
    void copypath(const vector<int> &p) {
        path = p;
    }
    bool contains(int city) {
        for (int i = 0; i < (int)path.size(); ++i) {
            if (path[i] == city) {
                return true;
            }
        }
        return false;
    }
    // lower bound: determined path length plus the minimum adjacent edge of
    // every city whose outgoing edge has not been fixed yet
    double bound() {
        if (bnd >= 0) {
            return bnd;
        }
        bnd = plen;
        bool marked[N];
        memset(marked, false, sizeof(marked));
        for (int i = 0; i < (int)path.size() - 1; ++i) {
            marked[path[i]] = true;
        }
        for (int i = 0; i < n; ++i) {
            if (!marked[i]) {
                bnd += minedge[i];
            }
        }
        return bnd;
    }
    bool operator<(TSPNode &other) {
        return bound() < other.bound();
    }
};

class CompareTSPNode {
public:
    bool operator()(TSPNode &t1, TSPNode &t2) {
        return t1.bound() - t1.size() > t2.bound() - t2.size();
    }
};

int main() {
    FILE *fin = fopen("tsp.txt", "r");
    freopen("log.txt", "w", stdout);
    fscanf(fin, "%d", &n);
    assert(n == N);
    for (int i = 0; i < n; ++i) {
        fscanf(fin, "%lf%lf", &cities[i][0], &cities[i][1]);
    }
    for (int i = 0; i < n; ++i) {
        for (int j = i + 1; j < n; ++j) {
            dis[i][j] = sqrt((cities[i][0] - cities[j][0]) * (cities[i][0] - cities[j][0])
                             + (cities[i][1] - cities[j][1]) * (cities[i][1] - cities[j][1]));
            dis[j][i] = dis[i][j];
        }
    }
    // a hand-picked tour; its length serves both as "infinity" and as the initial best cost
    int initialpath[] = {0, 1, 5, 10, 9, 11, 14, 18, 17, 21, 22, 20, 19, 24,
                         23, 15, 16, 13, 12, 8, 6, 2, 7, 3, 4, 0};
    double infinity = 0;
    for (int i = 0; i < n; ++i) {
        infinity += dis[initialpath[i]][initialpath[i + 1]];
    }
    fprintf(stdout, "%lf\n", infinity);
    for (int i = 0; i < n; ++i) {
        dis[i][i] = infinity;
        minedge[i] = infinity;
        for (int j = 0; j < n; ++j) {
            if (dis[i][j] < minedge[i]) {
                minedge[i] = dis[i][j];
            }
        }
    }
    priority_queue<TSPNode, vector<TSPNode>, CompareTSPNode> pq;
    TSPNode node;
    node.add(0);
    pq.push(node);
    time_t start;
    time(&start);
    time_t now;
    double mincost = infinity;
    while (!pq.empty()) {
        TSPNode node = pq.top();
        pq.pop();
        if (node.bound() >= mincost) {
            continue;  // prune: cannot improve on the best tour found so far
        }
        fprintf(stdout, "size: %d, bound: %.4lf\n", node.size(), node.bound());
        fprintf(stdout, "path: ");
        for (int i = 0; i < node.size(); ++i) {
            fprintf(stdout, "%d,", node.path[i]);
        }
        fprintf(stdout, "\n");
        if (node.size() == n - 1) {
            int remaining = -1;
            for (int i = 0; i < n; ++i) {
                if (!node.contains(i)) {
                    remaining = i;
                    break;
                }
            }
            double cost = node.plen + dis[node.lastcity()][remaining] + dis[remaining][0];
            if (cost < mincost) {
                mincost = cost;
                fprintf(stdout, "mincost: %.4lf\n", mincost);
                time(&now);
                fprintf(stdout, "[%ld secs].\n", (long)(now - start));
            }
            continue;
        }
        for (int i = 0; i < n; ++i) {
            if (!node.contains(i)) {
                TSPNode newn;
                newn.path = node.path;
                newn.add(i);
                newn.plen = node.plen + dis[node.lastcity()][i];
                if (newn.bound() < mincost) {
                    pq.push(newn);
                }
            }
        }
    }
    return 0;
}

The estimation of the lower bound in link 2 is more accurate, and more complex. Edges that have already been forced into the tour are counted exactly rather than estimated, and for every vertex half of the sum of its two smallest usable incident edges is taken, which is tighter than using only one edge per vertex. For the detailed analysis, and for the checks that rule out invalid states (such as forming a premature sub-loop), see the link (or the Java implementation). This is the method I finally used to obtain the correct solution; the Java implementation follows.

import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;

// Note: Digraph, DirectedEdge, DirectedCycle, In and StdOut are the helper classes
// from the algs4/stdlib library used in the Coursera algorithms course.
public class TSPBranchAndBound {
    private int N;
    private double mincost;
    private List<Integer> tour;
    private double[][] dis;

    class TSPNode {
        Digraph include;      // edges forced into the tour
        Digraph exclude;      // edges forbidden from the tour
        double length;        // total length of the included edges
        DirectedEdge next;    // the next edge to branch on

        public TSPNode() {
            include = new Digraph(N);
            exclude = new Digraph(N);
            length = 0;
            next = new DirectedEdge(0, 1, dis[0][1]);
        }

        public TSPNode(TSPNode node) {
            include = new Digraph(node.include);
            exclude = new Digraph(node.exclude);
            length = node.length;
            next = new DirectedEdge(0, 1, dis[0][1]);
        }

        private DirectedEdge next(DirectedEdge edge) {
            int v = edge.from();
            int w = edge.to() + 1;
            if (w == v) {
                w += 1;
            }
            if (w >= N) {
                w = 0;
                v += 1;
            }
            if (v >= N) {
                next = null;
            } else {
                next = new DirectedEdge(v, w, dis[v][w]);
            }
            return next;
        }

        public void include(DirectedEdge edge) {
            include.addEdge(edge.from(), edge.to());
            length += dis[edge.from()][edge.to()];
            next = next(edge);
        }

        public void exclude(DirectedEdge edge) {
            exclude.addEdge(edge.from(), edge.to());
            next = next(edge);
        }

        public List<Integer> tour() {
            List<Integer> cities = new LinkedList<Integer>();
            cities.add(0);
            int v = 0, w;
            while (true) {
                Iterator<Integer> iter = include.adj(v).iterator();
                if (iter.hasNext()) {
                    w = iter.next();
                    if (iter.hasNext()) {
                        return null;
                    }
                    if (w == 0) {
                        if (cities.size() == N) {
                            return cities;
                        }
                        return null;
                    }
                    cities.add(w);
                    v = w;
                } else {
                    return null;
                }
            }
        }

        public int size() {
            return include.E();
        }

        // Lower bound: for every vertex, the forced incoming/outgoing edge if one exists,
        // otherwise the cheapest non-excluded edge; half of the total, since every tour
        // edge is counted once as an in-edge and once as an out-edge.
        public double bound() {
            int[] in = new int[N];
            int[] out = new int[N];
            for (int i = 0; i < N; ++i) {
                in[i] = -1;
                out[i] = -1;
            }
            for (int i = 0; i < N; ++i) {
                Iterator<Integer> iter = include.adj(i).iterator();
                if (iter.hasNext()) {
                    int w = iter.next();
                    out[i] = w;
                    in[w] = i;
                }
            }
            boolean[][] inexclude = new boolean[N][N];
            boolean[][] outexclude = new boolean[N][N];
            for (int i = 0; i < N; ++i) {
                Iterator<Integer> iter = exclude.adj(i).iterator();
                while (iter.hasNext()) {
                    int w = iter.next();
                    outexclude[i][w] = true;
                    inexclude[w][i] = true;
                }
            }
            double bnd = 0;
            for (int i = 0; i < N; ++i) {
                if (in[i] == -1) {
                    double minin = -1;
                    for (int j = 0; j < N; ++j) {
                        if (j == i || inexclude[i][j]) {
                            continue;
                        }
                        if (minin < 0 || dis[j][i] < minin) {
                            minin = dis[j][i];
                        }
                    }
                    if (minin < 0) {
                        throw new RuntimeException("has no possible inbound vertex.");
                    }
                    bnd += minin;
                } else {
                    bnd += dis[in[i]][i];
                }
            }
            for (int i = 0; i < N; ++i) {
                if (out[i] == -1) {
                    double minout = -1;
                    for (int j = 0; j < N; ++j) {
                        if (j == i || outexclude[i][j]) {
                            continue;
                        }
                        if (minout < 0 || dis[i][j] < minout) {
                            minout = dis[i][j];
                        }
                    }
                    if (minout < 0) {
                        throw new RuntimeException("has no possible outbound vertex.");
                    }
                    bnd += minout;
                } else {
                    bnd += dis[i][out[i]];
                }
            }
            bnd /= 2;
            return bnd;
        }

        public boolean isValid() {
            // check that every vertex has at most one included outgoing and one included incoming edge
            int[] out = new int[N];
            int[] in = new int[N];
            for (int i = 0; i < N; ++i) {
                Iterator<Integer> iter = include.adj(i).iterator();
                if (iter.hasNext()) {
                    out[i]++;
                    int w = iter.next();
                    in[w]++;
                    if (in[w] > 1) {
                        return false;
                    }
                    if (iter.hasNext()) {
                        return false;
                    }
                }
            }
            // check that not too many edges are excluded, so every vertex can still
            // get an incoming and an outgoing edge
            int[] inexclude = new int[N];
            int[] outexclude = new int[N];
            for (int i = 0; i < N; ++i) {
                Iterator<Integer> iter = exclude.adj(i).iterator();
                while (iter.hasNext()) {
                    outexclude[i]++;
                    int w = iter.next();
                    inexclude[w]++;
                }
            }
            for (int i = 0; i < N; ++i) {
                if (inexclude[i] > N - 2) {
                    return false;
                }
                if (outexclude[i] > N - 2) {
                    return false;
                }
            }
            // check that the included edges contain no cycle shorter than N
            DirectedCycle finder = new DirectedCycle(include);
            if (finder.hasCycle()) {
                int cnt = 0;
                for (int v : finder.cycle()) {
                    cnt++;
                }
                if (cnt < N) {
                    return false;
                }
            }
            return true;
        }
    }

    public TSPBranchAndBound(String file) {
        In in = new In(file);
        N = in.readInt();
        dis = new double[N][N];
        double[][] cities = new double[N][2];
        for (int i = 0; i < N; ++i) {
            cities[i][0] = in.readDouble();
            cities[i][1] = in.readDouble();
        }
        for (int i = 0; i < N; ++i) {
            for (int j = i + 1; j < N; ++j) {
                dis[i][j] = Math.sqrt(Math.pow(cities[i][0] - cities[j][0], 2)
                        + Math.pow(cities[i][1] - cities[j][1], 2));
                dis[j][i] = dis[i][j];
            }
        }
        mincost = Double.POSITIVE_INFINITY;
        // mincost = 26443;  // cost of a hand-picked tour, used as the initial best to prune earlier
        tour = new LinkedList<Integer>();
        TSPNode node = new TSPNode();
        branch(node);
    }

    private void branch(TSPNode node) {
        if (node.size() == N) {
            StdOut.println("old: " + mincost + ", new: " + node.length);
            if (node.length < mincost) {
                mincost = node.length;
                tour = node.tour();
            }
            return;
        }
        if (node.next == null) {
            return;
        }
        DirectedEdge next = node.next;
        TSPNode with = new TSPNode(node);
        with.include(next);
        TSPNode without = new TSPNode(node);
        without.exclude(next);
        if (!with.isValid()) {
            if (without.isValid()) {
                double withoutbnd = without.bound();
                if (withoutbnd < mincost) {
                    branch(without);
                }
            }
        } else {
            if (!without.isValid()) {
                double withbnd = with.bound();
                if (withbnd < mincost) {
                    branch(with);
                }
            } else {
                double withbnd = with.bound(), withoutbnd = without.bound();
                if (withbnd > withoutbnd) {
                    if (withoutbnd < mincost) {
                        branch(without);
                    }
                    if (withbnd < mincost) {
                        branch(with);
                    }
                } else {
                    if (withbnd < mincost) {
                        branch(with);
                    }
                    if (withoutbnd < mincost) {
                        branch(without);
                    }
                }
            }
        }
    }

    public void report() {
        StdOut.println("mincost: " + mincost + "\n");
        for (int i = 0; i < tour.size(); ++i) {
            StdOut.print(tour.get(i));
            if (i != tour.size() - 1) {
                StdOut.print(", ");
            } else {
                StdOut.println();
            }
        }
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        TSPBranchAndBound tsp = new TSPBranchAndBound("tsp.txt");
        tsp.report();
        StdOut.println("time elapsed: " + ((double) System.nanoTime() - start) / 1000000000);
    }
}

That is my implementation. As the analysis shows, the lower bound of this method is still fairly loose: there is no constraint that those smallest edges can actually be linked together into a tour. In addition, computing the lower bound and checking whether a state is valid are both complicated, and every state has to record a lot of information, at least the included and excluded edges, so the space cost is high as well. I used Python to plot the 25 cities and then manually specified a tour to use as the initial best solution, which greatly reduces the running time and space. Even so, it still took a whole night (the correct result was on the screen when I woke up the next day).

import pylab
import sys

dash_style = (
    (0, 20, -15, 30, 10),
    (1, 30, 0, 15, 10),
    (0, 40, 15, 15, 10),
    (1, 20, 30, 60, 10),
)

fig = pylab.figure()
ax = fig.add_subplot(111)
with open(sys.argv[1], "r") as f:
    n = int(f.readline())
    for i in range(n):
        x, y = [float(x) for x in f.readline().split()]
        ax.plot(x, y, marker='o')
        (dd, dl, r, dr, dp) = dash_style[1 if i == 22 else (i % 2)]
        t = ax.text(x, y, str(i), withdash=True,
                    dashdirection=dd,
                    dashlength=dl,
                    rotation=r,
                    dashrotation=dr,
                    dashpush=dp,
                    )
pylab.show()

The best solution I have seen in the forum is this. A partial solution is a triple {A, S, B}: start from A, pass through the cities in set S, and end at B (S includes both A and B; the initial state is {A, {A}, A}). Let V denote the set of all cities. The lower bound of this partial solution is the length of the determined path from A to B, plus the minimum distance from B into V-S, plus the length of a minimum spanning tree of V-S, plus the minimum distance from V-S back to A. From this analysis we can see that each state needs to record very little information (only the current path), and the bound is cheap to compute (a BFS-style search over partial states plus Prim's or Kruskal's algorithm for the MST). I did not implement it, but according to the discussion it takes only about 2 seconds.
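I did not implement it either, but a sketch of that lower bound could look like the following, using Prim's algorithm for the minimum spanning tree; mst_length and mst_bound are illustrative names of my own, and the memoization over {A, S, B} states that makes the full algorithm fast is omitted here.

def mst_length(vertices, dis):
    # Prim's algorithm: length of a minimum spanning tree over `vertices`.
    vertices = list(vertices)
    if len(vertices) <= 1:
        return 0.0
    best = {v: dis[vertices[0]][v] for v in vertices[1:]}  # cheapest link into the tree
    total = 0.0
    while best:
        v = min(best, key=best.get)  # attach the cheapest remaining vertex
        total += best.pop(v)
        for u in best:
            if dis[v][u] < best[u]:
                best[u] = dis[v][u]
    return total

def mst_bound(path, all_cities, dis):
    # path = [A, ..., B]: the determined part of the tour.
    # Bound = determined length + min edge from B into V-S
    #         + MST of V-S + min edge from V-S back to A.
    a, b = path[0], path[-1]
    rest = [c for c in all_cities if c not in path]
    plen = sum(dis[path[i]][path[i + 1]] for i in range(len(path) - 1))
    if not rest:  # only the closing edge B -> A is missing
        return plen + dis[b][a]
    return (plen
            + min(dis[b][c] for c in rest)
            + mst_length(rest, dis)
            + min(dis[c][a] for c in rest))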

Once again, I am amazed at how huge the difference in efficiency between algorithms can be!
