Visit given city before a given cumulative distance in the traveling salesman problem

I would like to add an additional constraint to the traveling salesman problem: that a given city is visited within a given distance (say 100) from start. Is there a way to do this? My question is related to this unanswered CS question.

I have a mixed integer program using the R package CVXR that finds the shortest route without subtours (see below). The city order is represented in the vector node_order. The strategy I’ve pursued so far is:

1. Re-organize node_order so that the index is the order and the value is the city id
2. Look up the associated distances in distances
3. Compute a vector with the cumulative sum of these distances.
4. Add the constraint that city i must occur before the first index in (3) exceeding the distance constraint for that city.
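Outside of the solver these four steps are straightforward; the hard part is expressing them as MIP constraints. As a plain-Python sanity check (hypothetical 4-city distances and a brute-force search instead of CVXR; the helper name `visits_within` is made up for illustration), the cumulative-distance test in steps (2)-(4) looks like:

```python
import itertools

# Hypothetical symmetric distance matrix for 4 cities (0 is the start).
distances = [
    [0, 40, 60, 30],
    [40, 0, 50, 80],
    [60, 50, 0, 70],
    [30, 80, 70, 0],
]

def visits_within(tour, city, limit):
    """Steps (2)-(4): walk the tour, accumulate distances, and check that
    `city` is reached before the cumulative distance exceeds `limit`."""
    travelled = 0
    for prev, nxt in zip(tour, tour[1:]):
        travelled += distances[prev][nxt]
        if nxt == city:
            return travelled <= limit
    return False

# Brute force over all tours starting at city 0, keeping only those
# that visit city 2 within a cumulative distance of 100.
feasible = [
    (0,) + perm
    for perm in itertools.permutations([1, 2, 3])
    if visits_within((0,) + perm, city=2, limit=100)
]
```

The brute force is only a correctness check; encoding the same condition inside the MIP would need position variables rather than index-by-value lookups.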

The issue I’ve encountered with this approach is that I have not found a way to include finding-index-by-value in the optimization in CVXR. This is needed in both steps (1) and (4) above. Maybe this is possible after all, or there is another approach? I am willing to use packages other than CVXR and software other than R.

My current program:

```r
library(CVXR)

# Make distances
N = 10
distances = matrix(1:(N*N), ncol = N)

# Flag 1 iff we travel that path, 0 otherwise
do_transition = Variable(N, N, boolean = TRUE)

# Minimize the total duration of the traveled paths
objective = Minimize(sum(do_transition * distances))

# Only go one tour. Order is 1:(N-1)
node_order = Variable(N - 1, integer = TRUE)
ii = t(node_order %*% matrix(1, ncol = N - 1))  # repeat as N rows
jj = node_order %*% matrix(1, ncol = N - 1)     # repeat as N cols

# Constraints
constraints = list(
  do_transition * diag(N) == 0,                      # Disallow transitions to self (diagonal elements)
  sum_entries(do_transition, 1) == rep(1, N),        # Exactly one entrance to each node
  sum_entries(do_transition, 2) == rep(1, N),        # Exactly one exit from each node
  (jj - ii) + N * do_transition[2:N, 2:N] <= N - 1,  # One tour constraint (no subtours)
  node_order >= 1,                                   # This interval represents order as ranks (1 to N-1)
  node_order <= N - 1
)

# Find optimum
solution = solve(Problem(objective, constraints))
```

A bit of code pertaining to my current (unsuccessful) attempts:

```r
# Get tour order
# tour = order(c(NA, solution$getValue(node_order)))  # R solution
tour = rep(NA, N - 1)
tour[solution$getValue(node_order)] = 2:N

# Get tour distances
distances_optim = diag(distances[tour, tour[2:N]])

# Tour cumulative distances
distances_cumul = cumsum_axis(distances_optim)
```

For the travelling salesman problem, can someone explain the Christofides algorithm in a simple way? Why is it so fast?

Google says it finds a solution within 1.5 times the optimum. But all the concepts like minimum spanning tree, minimum weight matching, etc. are hard for me to connect. Maybe some real life example or diagram would make things clear.
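Christofides is fast because each of its three steps is polynomial: build a minimum spanning tree, add a minimum-weight perfect matching on the MST’s odd-degree vertices, then shortcut an Euler tour of the combined multigraph into a cycle. As a concrete starting point, here is just the MST step, Prim’s algorithm on a hypothetical 4-city matrix (the data is made up for illustration):

```python
# Prim's algorithm on a small complete graph: the first step of Christofides.
INF = float("inf")

dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]

def prim_mst(dist):
    n = len(dist)
    in_tree = [False] * n
    best = [INF] * n          # cheapest edge connecting each node to the tree
    best[0] = 0
    total = 0
    for _ in range(n):
        # Grow the tree by the cheapest edge to a node not yet in it.
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
        in_tree[u] = True
        total += best[u]
        for v in range(n):
            if not in_tree[v] and dist[u][v] < best[v]:
                best[v] = dist[u][v]
    return total

# Here the MST picks edges 0-1 (2), 1-3 (4), 3-2 (3), total weight 9.
```

The 1.5 guarantee comes from the fact that both the MST and the matching can be bounded against the optimal tour; no exponential search is ever needed.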

Need help with an interesting variant of the travelling salesman problem

I’m working on an assignment in my CS class and the gist of the problem is as follows.

A salesman has a map of some apartments (over 300 blocks). I am given the (x,y) coordinates of each block as well as the “money” he will earn by visiting each block. I need to find the shortest route for the salesman to take such that he will earn x amount of money. He does not have to visit all the blocks. At the end of the day he will have to return to the origin (0,0).

I used a greedy algorithm by finding the shortest possible path he can take at each step. E.g. from the origin I find the block with the lowest euclidean distance from the origin. Let’s say this block is (2,2). I then find the block with the lowest euclidean distance from (2,2), and so on until I have x amount of money. Using this greedy algorithm I then performed a 2-opt local search to improve my solution further.
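The greedy construction described above can be sketched as follows (the block coordinates and money values here are hypothetical, not from the assignment):

```python
import math

# Hypothetical blocks: (x, y, money).
blocks = [(2, 2, 50), (5, 1, 30), (1, 6, 40), (8, 8, 100)]

def greedy_route(blocks, target_money):
    """From the origin, repeatedly move to the nearest unvisited block
    until the collected money reaches the target, then return to (0, 0)."""
    pos, earned, route = (0.0, 0.0), 0, [(0.0, 0.0)]
    remaining = list(blocks)
    while earned < target_money and remaining:
        nearest = min(remaining, key=lambda b: math.dist(pos, (b[0], b[1])))
        remaining.remove(nearest)
        pos = (nearest[0], nearest[1])
        route.append(pos)
        earned += nearest[2]
    route.append((0.0, 0.0))  # return to the origin at the end of the day
    return route, earned

route, earned = greedy_route(blocks, target_money=100)
```

A 2-opt (or 3-opt) pass would then operate on `route` while keeping the visited set fixed; the money constraint only affects which blocks are chosen, not the reordering.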

The problem lies here though: when I perform a 3-opt local search using the implementation from wikipedia (https://en.wikipedia.org/wiki/3-opt), I get a much worse result than either the greedy or the 2-opt. Is there something wrong with the wiki code and if not, what did I do wrong? Thanks.

Solving a modified Travelling Salesman Problem (TSP)

I am trying to solve a modified version of the TSP. In my version, multiple visits to a city are allowed as long as the path is the shortest, and only a subset of the cities is compulsory to visit: you can pass through other cities on the way to the compulsory ones if that makes the path shorter, but otherwise the other cities can be ignored. For simplicity, the starting city is fixed. I know approximate solutions for the traditional TSP, but I have trouble solving this one. A naive approach is to try all possible orderings of the subset cities and check which has the shortest total path length, but that means trying on the order of k! orderings for k compulsory cities, plus the complexity of finding the shortest path between each pair of cities. So, what should I use to solve this problem?
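One standard way to attack this (a reduction sometimes called the Steiner TSP, not something stated in the question itself) is to compute all-pairs shortest paths first, which implicitly allows passing through non-compulsory cities, and only then brute-force the compulsory subset. A sketch on a hypothetical 5-city network:

```python
import itertools

INF = float("inf")

# Hypothetical road network: 5 cities; only 0, 2 and 4 matter, 0 is the start.
w = [
    [0,   2,   INF, INF, 7],
    [2,   0,   3,   INF, INF],
    [INF, 3,   0,   4,   INF],
    [INF, INF, 4,   0,   1],
    [7,   INF, INF, 1,   0],
]

n = len(w)
d = [row[:] for row in w]
# Floyd-Warshall: d[i][j] becomes the shortest-path cost, possibly through
# non-compulsory cities -- which is exactly what permits "multiple visits".
for k in range(n):
    for i in range(n):
        for j in range(n):
            if d[i][k] + d[k][j] < d[i][j]:
                d[i][j] = d[i][k] + d[k][j]

compulsory, start = [2, 4], 0
best = INF
# Brute force only over orderings of the compulsory cities (k! of them),
# not over all n cities.
for perm in itertools.permutations(compulsory):
    order = (start,) + perm + (start,)
    cost = sum(d[a][b] for a, b in zip(order, order[1:]))
    best = min(best, cost)
```

After the closure step, the problem becomes an ordinary TSP on just the compulsory cities, so any standard exact or approximate TSP method applies to it.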

What is the difference between nearest and cheapest insertion algorithms for a Traveling salesman problem?

I know that in the cheapest insertion algorithm we add the node not yet in the “base group” whose insertion has the smallest cost over all possible combinations, and in the nearest insertion we add the node with the smallest cost. So, do they differ only in how the combinations are made?

For example, I have the following weighted matrix graph:

```
     2    13   14   17   20
2    0.0  Inf  Inf  1.9  1.7
13   Inf  0.0  7.3  7.4  7.2
14   Inf  7.3  0.0  7.7  7.8
17   1.9  7.4  7.7  0.0  9.2
20   1.7  7.2  7.8  9.2  0.0
```

If I start from node 2 from each method:

Nearest

1) 2-20-2

2.1) 2-17-20-2 = 12.8

2.2) 2-17-20-2 = 12.8 *Chosen

3.1) 2-13-20-17-2 = Inf

3.2) 2-20-13-17-2 = 18.2 *Chosen

3.3) 2-20-17-13-2 = Inf

4.1) 2-14-20-13-17-2 = Inf

4.2) 2-20-14-13-17-2 = 26.1

4.3) 2-20-13-14-17-2 = 25.8 *Chosen

4.4) 2-20-13-17-14-2 = Inf

Cheapest

1) 2-20-2

2.1.a) 2-13-20-2 = Inf

2.1.b) 2-20-13-2 = Inf

2.2.a) 2-14-20-2 = Inf

2.2.b) 2-20-14-2 = Inf

2.3.a) 2-17-20-2 = 12.8

2.3.b) 2-20-17-2 = 12.8

So, with the cheapest approach, do I explicitly make all combinations?
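A sketch of nearest insertion under one common formulation (selection: the non-tour node closest to any tour node; insertion: the position with the smallest cost increase) reproduces the costs of the worked example above (12.8, 18.2, 25.8). Ties are broken by first position here, so the tour can come out reversed relative to the hand computation:

```python
INF = float("inf")

labels = [2, 13, 14, 17, 20]
# Symmetric distances copied from the matrix above.
D = {
    (2, 13): INF, (2, 14): INF, (2, 17): 1.9, (2, 20): 1.7,
    (13, 14): 7.3, (13, 17): 7.4, (13, 20): 7.2,
    (14, 17): 7.7, (14, 20): 7.8,
    (17, 20): 9.2,
}

def d(a, b):
    if a == b:
        return 0.0
    return D.get((a, b), D.get((b, a)))

def nearest_insertion(start):
    # Initial sub-tour: the start and its nearest neighbour.
    rest = [c for c in labels if c != start]
    first = min(rest, key=lambda c: d(start, c))
    tour = [start, first, start]
    rest.remove(first)
    while rest:
        # Selection rule: the node nearest to any node already in the tour.
        node = min(rest, key=lambda c: min(d(c, t) for t in set(tour)))
        rest.remove(node)
        # Insertion rule: the position with the smallest cost increase.
        pos = min(range(len(tour) - 1),
                  key=lambda i: d(tour[i], node) + d(node, tour[i + 1])
                                - d(tour[i], tour[i + 1]))
        tour.insert(pos + 1, node)
    return tour, sum(d(a, b) for a, b in zip(tour, tour[1:]))

tour, cost = nearest_insertion(2)
```

Cheapest insertion differs only in the selection rule: instead of picking the closest node first and then its best position, it scans every (node, position) pair at once and picks the pair with the smallest increase; so yes, it explicitly considers all combinations at each step.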

For what applications of the traveling salesman problem does visiting each city at most once truly matter?

Traditionally, the traveling salesman problem has you visit a city at least once and at most once.

However, if you were an actual traveling salesman, you would want the least cost route to visit each city at least once, and you wouldn’t be bothered visiting a city 2, 3, or more times. For a given city, you might stop and hawk your wares only once, and on subsequent visits, only drive through the city without stopping.

Consider an undirected graph having a city incident to exactly two edges. The cost on one of these edges is only 10 units, while the cost on the other is 99,999,999,999. If you insist on visiting each city at most once, then you are forced to incur the cost of the high cost edge. However, if you allow yourself to visit cities multiple times, then you simply leave the way you came in (on the low cost edge). The low cost edge leads you back to a city you’ve passed through before.
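One standard way to make this precise is the metric closure: replace every pairwise cost with the shortest-path cost, solve the ordinary at-most-once TSP on the result, and expand each closure edge back into a walk that may revisit cities. A sketch on a hypothetical 4-city graph containing the two-edge city described above:

```python
INF = float("inf")

# City 2 is incident to exactly two edges: a cheap one (10) to city 1
# and a very expensive one to city 3.
w = [
    [0,   5,   INF, 6],
    [5,   0,   10,  INF],
    [INF, 10,  0,   99_999_999_999],
    [INF if True else 0, INF, 99_999_999_999, 0][0:],  # placeholder removed below
]
w[3] = [6, INF, 99_999_999_999, 0]

n = len(w)
d = [row[:] for row in w]
for k in range(n):       # Floyd-Warshall metric closure
    for i in range(n):
        for j in range(n):
            d[i][j] = min(d[i][j], d[i][k] + d[k][j])

# A strict Hamiltonian cycle on the original graph must use the expensive edge:
strict = w[0][1] + w[1][2] + w[2][3] + w[3][0]
# On the metric closure, the leg 2 -> 3 really means 2 -> 1 -> 0 -> 3
# (leaving city 2 the way we came in, revisiting 1 and 0):
relaxed = d[0][1] + d[1][2] + d[2][3] + d[3][0]
```

So "at most once" genuinely matters only when the closure trick is unavailable, e.g. when there is a physical or legal reason a city cannot be traversed twice; otherwise the relaxed version is the right model.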

The traveling salesman problem is highly contrived for an actual traveling salesman. I want to give students an application for which there’s a real incentive to visit each city at most once. For what applications is visiting each city at most once a critical aspect of the problem?

Can I calculate the time to find the optimum of the travelling salesman problem with a supercomputer, and what is the limit on the number of cities for today’s computers?

I want to know the real limit of the computational power we have now: what is the largest number of cities for which I can reach the optimum solution? I believe the fastest computer performs about 10^19 operations per second.

Can I calculate the time it will take by looking at the connections of the directed graph, and estimate that time from this 10^19 figure?
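A back-of-the-envelope estimate, taking the question’s 10^19 operations per second at face value and (optimistically) assuming one whole tour can be evaluated per operation: a symmetric instance with n cities has (n-1)!/2 distinct tours.

```python
import math

OPS_PER_SECOND = 1e19  # the assumed supercomputer speed from the question

def brute_force_seconds(n):
    """Number of distinct tours of a symmetric TSP, (n-1)!/2,
    divided by the assumed evaluation rate."""
    tours = math.factorial(n - 1) // 2
    return tours / OPS_PER_SECOND

# Around n = 28 the brute-force time already exceeds a year,
# even at this optimistic rate.
```

Exact solvers do far better than brute force (Held-Karp dynamic programming is O(n²·2ⁿ), and branch-and-cut solvers have closed instances with tens of thousands of cities), so the factorial estimate is only a bound on the naive approach, not on what is solvable.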

I can also edit the real graph of the problem and delete cities from it.

Is the travelling salesman problem with Dijkstra’s algorithm the optimal solution?

I’ve tried solving the travelling salesman problem using dynamic programming, with Dijkstra’s algorithm to find shortest paths, and the solution was always optimal (on the test data I used). My question is: has anyone found a better way to solve the problem?
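For context, the exact dynamic program usually meant here is Held-Karp, which runs in O(n²·2ⁿ); Dijkstra’s algorithm by itself only gives shortest paths between city pairs, not a tour. A sketch of Held-Karp on a hypothetical 4-city matrix:

```python
from itertools import combinations

# Hypothetical symmetric distance matrix; city 0 is the start.
dist = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]

def held_karp(dist):
    """Exact TSP by dynamic programming over subsets (Held-Karp)."""
    n = len(dist)
    # dp[(mask, j)] = cheapest cost of starting at 0, visiting exactly the
    # cities in `mask`, and ending at j (mask always contains 0 and j).
    dp = {(1 | (1 << j), j): dist[0][j] for j in range(1, n)}
    for size in range(3, n + 1):
        for subset in combinations(range(1, n), size - 1):
            mask = 1
            for c in subset:
                mask |= 1 << c
            for j in subset:
                prev_mask = mask ^ (1 << j)
                dp[(mask, j)] = min(dp[(prev_mask, k)] + dist[k][j]
                                    for k in subset if k != j)
    full = (1 << n) - 1
    # Close the tour by returning to city 0.
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))
```

This is exponential but exact; it is the best known exact complexity class for general TSP, so agreeing with it on test data does not mean a polynomial method has been found.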

Traveling Salesman Problem with profit and time limit as ILP formulation

How to formulate the following problem?

The salesman gains a profit $$p_{i}$$ when visiting city i; a trip between city i and city j costs $$c_{ij}$$ and takes $$t_{ij}$$ time. The trip must not exceed a time limit T. The difference between profit and costs must be maximized.

I’ve formulated this problem as follows:

maximize $$\sum _{i=1}^{n} \sum _{j=1}^{n}p_{j}x_{ij}- \sum _{i=1}^{n} \sum _{j=1}^{n}c_{ij}x_{ij}$$

Can I formulate the trip time as $$\sum _{i=1}^{n} \sum _{j=1}^{n}t_{ij}x_{ij}$$ and add the constraint $$\sum _{i=1}^{n} \sum _{j=1}^{n}t_{ij}x_{ij} \leqslant T$$?
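Under the usual arc-based reading, the proposed time constraint is just a linear sum over the chosen arcs, so it is a valid ILP constraint. A quick numeric check (hypothetical 3-city data and a fixed candidate x, not a solver run):

```python
# Hypothetical data for 3 cities; x encodes the tour 0 -> 1 -> 2 -> 0.
p = [0, 8, 5]                            # profit p_i for visiting city i
c = [[0, 2, 4], [2, 0, 3], [4, 3, 0]]    # travel costs c_ij
t = [[0, 1, 2], [1, 0, 2], [2, 2, 0]]    # travel times t_ij
T = 6                                    # time limit
x = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]    # binary arc variables x_ij

n = 3
# Arc-based profit: entering city j earns p_j, so profit couples to x as p_j * x_ij.
profit = sum(p[j] * x[i][j] for i in range(n) for j in range(n))
cost = sum(c[i][j] * x[i][j] for i in range(n) for j in range(n))
trip_time = sum(t[i][j] * x[i][j] for i in range(n) for j in range(n))

objective = profit - cost        # the quantity to maximize
feasible = trip_time <= T        # the proposed time-limit constraint
```

Note the sums are linear in x, so both the objective and the time constraint are ILP-compatible; what the formulation still needs (as in plain TSP) is degree and subtour-elimination constraints, plus visit variables if not every city must be entered.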

I’ve also read about the Time-Dependent TSP (TDTSP), but I’m not sure it applies to my case…