Artificial ant colony algorithm for graph

Let’s assume we have an ant at node $1$ with candidate vertices $\{2,3,4\}$. How do I compute which one it chooses? I mean, if $p_{ij}(k)$ is the probability that the $k$-th ant at node $i$ chooses node $j$, do I, for example, take $j=2$, compute the probability, and if $random(0,1) < p_{12}(k)$ the ant chooses $j=2$; if not, move on to the next vertex $j=3$, and so on until the inequality is satisfied? The next question: can you recommend sources where I can find how to convert this algorithm from the graph version to the function-optimization version? Thank you.
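For what it’s worth, the usual way to implement this choice is roulette-wheel (fitness-proportionate) selection: draw a single uniform random number and walk the cumulative sum of the $p_{1j}(k)$ values, rather than testing each candidate against a fresh random draw, which biases the choice toward vertices tried first. A minimal Python sketch, with made-up probabilities standing in for the computed $p_{ij}(k)$:

```python
import random

def choose_next(probabilities):
    """Roulette-wheel selection: return node j with probability probabilities[j].

    `probabilities` maps candidate nodes to p_ij(k); values must sum to 1.
    A single uniform draw is compared against the running cumulative sum,
    rather than testing each node against its own random number.
    """
    r = random.random()
    cumulative = 0.0
    for node, p in probabilities.items():
        cumulative += p
        if r < cumulative:
            return node
    return node  # guard against floating-point round-off at the upper end

# Example: ant at node 1 with candidates {2, 3, 4} (probabilities invented)
p = {2: 0.5, 3: 0.3, 4: 0.2}
counts = {2: 0, 3: 0, 4: 0}
for _ in range(10000):
    counts[choose_next(p)] += 1
```

Over many draws the empirical frequencies track the given probabilities, which is exactly what the per-vertex inequality test fails to do.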

“Loneliest point” algorithm


I’m looking for an algorithm to find the “loneliest point” in a set S relative to another set R.

Specifically, given a set of points S in n-dimensional space, let R be the minimal bounding box of S (i.e. the Cartesian product of the coordinate ranges of S across all dimensions).

I want to find the minimal distance d such that for every r ∈ R there is some s ∈ S with dist(r, s) < d. I call the point in R at maximal distance from S the loneliest point in R, but I’m not interested in finding this point itself, just the distance.

A solution:

I have an exhaustive solution, as follows: construct the Voronoi diagram of S within R, find the distance from each point s to the furthest vertex of its Voronoi cell, and take the maximum over all of these.

Why this is inadequate:

This works fine for low-dimensional data, but it does not scale well to bigger data sets. I’m sure there must be a better, approximate solution. I don’t need exact values, I just need an estimate to some predefined degree of accuracy.

Use case:

The use case here is that I am trying to fix an upper bound on the Euclidean distance that a point in a design space can be from a given data set, so that I can have a consistent scale across the space.


Is anyone aware of such an algorithm? It seems like it should be an established problem but I haven’t so far found an easy solution. Feels like it should be related to nearest-neighbour somehow.
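One simple approximate approach (my suggestion, not an established named algorithm): sample R on a uniform grid, take each sample’s nearest-neighbour distance to S, and return the maximum; the grid resolution bounds the error by half a cell diagonal, which gives the “predefined degree of accuracy” knob. A brute-force Python sketch; a k-d tree would replace the inner loop for larger S:

```python
import itertools
import math

def loneliest_distance(points, resolution=20):
    """Approximate max over the bounding box R of the distance to the
    nearest point of S, by exhaustive grid sampling.

    The error is at most half the grid-cell diagonal, so `resolution`
    controls the accuracy. Brute-force nearest neighbour for clarity.
    """
    dim = len(points[0])
    lo = [min(p[d] for p in points) for d in range(dim)]
    hi = [max(p[d] for p in points) for d in range(dim)]
    axes = [[lo[d] + (hi[d] - lo[d]) * i / (resolution - 1)
             for i in range(resolution)] for d in range(dim)]
    best = 0.0
    for r in itertools.product(*axes):
        nearest = min(math.dist(r, s) for s in points)
        best = max(best, nearest)
    return best

# Unit-square corners: the loneliest point is the centre, distance sqrt(2)/2
S = [(0, 0), (0, 1), (1, 0), (1, 1)]
d = loneliest_distance(S, resolution=21)
```

Note the grid size grows as resolution^n, so in high dimensions you would swap the grid for random or low-discrepancy sampling of R.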


What is the run time of this algorithm, written in pseudocode?

count = 0
for i = 1 to n:
    for j = 1 to i:
        count += 1

From my understanding, we can break this up into two summations, expressing the inner $j$ loop as a summation nested inside the summation for the $i$ loop:

$ \sum\limits_{i=1}^{n} \sum\limits_{j=1}^{i} 1$

since each increment of count is a single O(1) operation.

Then, we can manipulate the above summation to:

= $ \sum\limits_{i=1}^{n} (i - 1 + 1)$, using the property $ \sum\limits_{j=a}^{b} 1 = b - a + 1$

= $ \sum\limits_{i=1}^{n} i$

= $ \frac{n(n+1)}{2}$ = $ O(n^2)$

Is this the correct approach?
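The derivation can be sanity-checked by running the loops and comparing the count against the closed form; a small Python check:

```python
def count_ops(n):
    """Run the nested loops from the question and count inner iterations."""
    count = 0
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            count += 1
    return count

# The closed form n(n+1)/2 matches for every n, confirming Theta(n^2) growth.
assert all(count_ops(n) == n * (n + 1) // 2 for n in range(50))
```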

Why is my algorithm version so slow with this input?

I’ve written F# code to solve Advent of Code Day 18 Part 1. While it seems to work fine on other simple demo inputs, it gets stuck on the following input:

#################
#i.G..c...e..H.p#
########.########
#j.A..b...f..D.o#
########@########
#k.E..a...g..B.n#
########.########
#l.F..d...h..C.m#
#################

There is a reference solution in Python which is correct and fast, but I fail to see the fundamental difference between the two algorithms, beyond the minor code differences (also because the languages are different).

I’ve tried with concurrency vs a queue, and with a tree vs a grid/map (see my GitHub history), but with no luck so far.

The principal part of the code is described below. It should fall under a breadth first search (BFS) algorithm.

Here is the single step by which I build up the solution:

let next (step:Step) (i:int) (solution: Solution) (fullGrid: Map<char,Map<char,Grid>>) : Solution =
    let branches = solution.tree.branches
    let distance = solution.tree.distance + branches.[i].distance
    let area = branches.[i].area
    //let newbranches, back, keys =
    match step with
    | SpaceStep ->
        failwith "not expected with smart grid"
    | KeyStep ->
        let keys = area :: solution.keys
        let grid = fullGrid.[area]
        let tree = grid2tree area distance keys grid
        {keys=keys; tree=tree}

The fullGrid is supposed to contain the matrix of distances. The wrapping solver is simply a recursion or queue based version of the BFS.

let findSolution (keynum:int) (solution: Solution) (fullGrid: Map<char,Map<char,Grid>>) : Solution option =
    let mutable solution_queue : queue<Solution> = MyQueue.empty
    solution_queue <- enqueue solution_queue solution
    let mutable mindistance : int option = None
    let mutable alternatives : Solution list = List.empty

    while (MyQueue.length solution_queue > 0) do
        let solution = dequeue &solution_queue
        let solution = {solution with tree = grid2tree solution.tree.area solution.tree.distance solution.keys fullGrid.[solution.tree.area]}
        let branches = solution.tree.branches
        if (branches = [||]) then
            if solution.keys.Length = keynum
            then updateMin &mindistance &alternatives solution
        else
        match mindistance with
        | Some d when d < solution.tree.distance + (solution.tree.branches |> Array.map (fun t -> t.distance) |> Array.min) -> ()
        | _ ->
        let indexes =
            [|0..branches.Length-1|]
            |> Array.sortBy(fun idx -> ((if isKey branches.[idx].area then 0 else 1), branches.[idx].distance))
        for i in indexes do
            if branches.[i].area = '#' then
                failwith "not expected with smart grid"
            else
            if branches.[i].area = Space then
                failwith "not expected with smart grid"
            else
            if (Char.IsLower branches.[i].area) then
                let solutionNext = next KeyStep i solution fullGrid
                if solutionNext.keys.Length = keynum
                then updateMin &mindistance &alternatives solutionNext
                else
                solution_queue <- enqueue solution_queue solutionNext
            else
            if (Char.IsUpper branches.[i].area) then
                failwith "not expected with smart grid"

    match alternatives with
    | [] -> None
    | alternatives ->
        alternatives |> List.minBy(fun a -> a.tree.distance) |> Some
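I can’t diagnose the F# code without running it, but a very common cause of blow-up on exactly this kind of input is re-exploring equivalent search states: fast reference solutions typically memoize on the pair (current position, set of keys held), so each such state is expanded at most once instead of exponentially often. A hedged Python sketch of that idea over a small, made-up key graph; the graph, distances, and door sets are purely illustrative stand-ins for the question’s fullGrid:

```python
import heapq

# Hypothetical precomputed key graph: for each node, the reachable keys
# with (distance, doors passed on the way). Values are invented.
graph = {
    '@': {'a': (2, set()), 'b': (4, set())},
    'a': {'b': (6, set()), 'c': (4, {'B'})},
    'b': {'a': (6, set()), 'c': (10, {'B'})},
    'c': {},
}
all_keys = {'a', 'b', 'c'}

def min_steps():
    """Dijkstra over (position, frozenset-of-keys) states.

    The crucial point: `best` memoizes the shortest distance per state,
    so each (position, key-set) pair is expanded at most once. Without
    this memo, the search revisits equivalent states exponentially often.
    """
    start = ('@', frozenset())
    best = {start: 0}
    heap = [(0, '@', frozenset())]
    while heap:
        dist, pos, keys = heapq.heappop(heap)
        if keys == all_keys:
            return dist
        if dist > best.get((pos, keys), float('inf')):
            continue  # stale queue entry, already expanded cheaper
        for key, (d, doors) in graph[pos].items():
            if key in keys or not all(door.lower() in keys for door in doors):
                continue  # key already held, or a door is still locked
            state = (key, keys | {key})
            nd = dist + d
            if nd < best.get(state, float('inf')):
                best[state] = nd
                heapq.heappush(heap, (nd, *state))
    return None
```

If the F# version enqueues whole partial solutions without such a per-(position, keys) bound, that alone can explain the difference on this input, whose layout maximises the number of interleaved key orders.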

Learning algorithm analysis

I’m learning about the order of growth of algorithms.

For x >= 2, rand(x) is a function that returns one value from 1 to x-1, each with uniform probability $ \frac{1}{x-1}$; max(x,y) outputs the larger value and min(x,y) outputs the smaller. I need to find the worst-case complexity of each of the following algorithms.

Algorithm A:
    Input: n
    x = n
    while x >= 2:
        y = rand(x)
        x = max(y, x - y)

Algorithm B:
    Input: n
    x = n
    while x >= 2:
        y = rand(x)
        x = min(y, x - y)

Algorithm C:
    void fn(x: int):
        if x >= 2:
            y = rand(x)
            fn(y)
            fn(x - y)
        else:
            return
    fn(n)

For algorithm A: when x = 10, suppose rand() returns 1; then max(1, x-1) = x-1, so x decreases by only 1 per iteration, and the worst case is O(n).

For algorithm B: when x = 10, suppose rand() returns 4; it computes min(4, 10-4) = 4, runs again with x = 4, and so on. In the worst case the new value of x is ⌊x/2⌋, since min(y, x-y) ≤ x/2.

For algorithm C: it is recursive. For example, if x = 10 and y = rand(x) = 4, it calls fn(4) and fn(10-4) = fn(6), which recurse again and again until the argument drops below 2. But how can I find the worst case?
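One way to build intuition before proving bounds is to simulate each algorithm with a rand strategy you control, so the worst case can be forced (Python sketch; rand is passed in as a function). It also exposes a fact worth checking by induction: for algorithm C the total number of calls satisfies T(x) = 1 + T(y) + T(x-y) with T(x) = 1 for x < 2, which gives T(n) = 2n - 1 no matter what rand returns:

```python
def run_A(n, rand):
    """Iteration count of algorithm A under a given rand strategy."""
    x, steps = n, 0
    while x >= 2:
        y = rand(x)
        x = max(y, x - y)
        steps += 1
    return steps

def run_B(n, rand):
    """Iteration count of algorithm B: min(y, x-y) <= x/2, so x at least halves."""
    x, steps = n, 0
    while x >= 2:
        y = rand(x)
        x = min(y, x - y)
        steps += 1
    return steps

def calls_C(x, rand):
    """Total number of calls made by the recursive algorithm C."""
    if x < 2:
        return 1
    y = rand(x)
    return 1 + calls_C(y, rand) + calls_C(x - y, rand)

n = 100
# A, adversarial rand(x) = 1: x shrinks by 1 each step -> n - 1 steps, linear
assert run_A(n, lambda x: 1) == n - 1
# B: whatever rand returns, x at least halves -> logarithmic step count
assert run_B(n, lambda x: x // 2) <= n.bit_length()
# C: the call count is 2n - 1 for EVERY choice of rand -> Theta(n) always
assert calls_C(n, lambda x: 1) == 2 * n - 1
assert calls_C(n, lambda x: x // 2) == 2 * n - 1
```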

Optimality of a Greedy Algorithm

Suppose you designed a greedy algorithm to obtain an optimal solution, and the algorithm can produce different combinations of values, any of which is an optimal solution. How do you prove its optimality?

For example, you have a set of numbers $ \mathcal{M}=\{1,2,3,4\}$ and you want to design an algorithm that obtains the minimum number of integers required to reach a sum of 5. In this case, $ 1,4$ or $ 2,3$ both produce 5, and both are optimal solutions, as the minimum number required is two.
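For concreteness, here is one plausible greedy for the toy problem; my assumptions, not the question’s: each number is used at most once, and the greedy repeatedly takes the largest number that still fits. Proving such a greedy optimal usually goes by an exchange argument (swap elements of an assumed-better solution into the greedy one without making it worse) rather than by plain contradiction:

```python
def greedy_min_count(numbers, target):
    """Greedy: repeatedly take the largest unused number that still fits.

    Toy illustration for M = {1, 2, 3, 4} with target 5; each number may
    be used at most once. Largest-first is NOT optimal for every possible
    input, which is exactly why a proof (e.g. an exchange argument) is
    needed for the instances where it is used.
    """
    remaining = target
    chosen = []
    for x in sorted(numbers, reverse=True):
        if x <= remaining:
            chosen.append(x)
            remaining -= x
        if remaining == 0:
            return chosen
    return None  # no subset reaches the target exactly

# One optimal answer among several: {4, 1} and {3, 2} both have size 2.
```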

How to prove the optimality of the algorithm ?

I tried by contradiction: assume there is an optimal solution $ P^*$ and my algorithm produces a non-optimal solution $ P$, so $ P \neq P^*$. But I do not know how to continue the argument.

Greedy Algorithm on Knockout Tournaments: Proof of Correctness

The original problem is from ICPC NWERC 2017 (problem K). Here we post a simplified version.

You are given an array $ rk[1\ldots 2^k]$ of positive integers representing the ranks of players $ 1\ldots2^k$ . The tournament evolves in a random way, so that when player $ i$ faces player $ j$ , he wins with probability $ \frac{rk[i]}{rk[i] + rk[j]}$ . The losing player is knocked out from the tournament, thus in $ k$ rounds the tournament is over.

You are requested to arrange the players into a knockout tournament starting line-up (i.e. on the leaves of a complete binary tree of height $ k$ ) in the way that maximizes the probability that player $ 1$ wins the tournament.

The official greedy solution, presented without any trace of proof, is the following:

  • Sort $ rk[2\ldots2^k]$ in ascending order.
  • Arrange players so that $ rk[1\ldots2^{k-1}]$ and $ rk[2^{k-1}+1\ldots2^{k}]$ go into the two root-based sub-trees, and recurse on the sub-trees until you reach singletons.
  • For each node $ v$ of the tree compute (bottom-up) the probability vector $ P_v$ such that $ P_v[x]$ is the probability of player $ x$ winning the sub-tournament rooted at $ v$ .

This algorithm runs in $ \mathcal{O}(n^2 \log(n))$ .
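The third bullet, the bottom-up computation of the probability vectors, can be sketched as follows (illustrative rk values of my choosing; exact rational arithmetic via fractions just to keep the sanity check exact):

```python
from fractions import Fraction

rk = [0, 1, 2, 3, 4]  # rk[1..4]; index 0 unused. Illustrative values.

def win_prob(leaves):
    """Probability vector for the sub-tournament on `leaves` (length a
    power of two), computed bottom-up:
      P_v[x] = P_left[x] * sum_y P_right[y] * rk[x] / (rk[x] + rk[y]),
    plus the symmetric term when x sits in the right sub-tree.
    """
    if len(leaves) == 1:
        return {leaves[0]: Fraction(1)}
    half = len(leaves) // 2
    L = win_prob(leaves[:half])
    R = win_prob(leaves[half:])
    P = {}
    for x, px in L.items():
        P[x] = px * sum(py * Fraction(rk[x], rk[x] + rk[y]) for y, py in R.items())
    for y, py in R.items():
        P[y] = py * sum(px * Fraction(rk[y], rk[y] + rk[x]) for x, px in L.items())
    return P

P = win_prob([1, 4, 2, 3])  # one candidate line-up for k = 2
# The probabilities over all players sum to exactly 1.
```

Each merge of two vectors of length m costs O(m^2), which summed over the tree gives the stated O(n^2 log n).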

How to prove that the algorithm is also correct?

Genetic algorithm pressure using only selection

Suppose you have a population of N individuals with fitness 1, 2, . . . , N (i.e., every individual has a unique fitness value). Suppose you repeatedly apply tournament selection without replacement, with tournament size s = 2, to this population, without doing crossover, mutation, or replacement. In other words, you run a genetic algorithm with selection alone.

After a certain number of generations you will end up with a population consisting of N copies of the same individual. Can you give an estimate of the number of generations needed to achieve that?
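The takeover time for binary tournament selection is generally analyzed as growing on the order of log N, and a quick simulation gives a concrete estimate. A sketch, under one common reading of “without replacement”: the two competitors in each tournament are distinct individuals:

```python
import random
import statistics

def takeover_generations(n, seed=None):
    """Generations of size-2 tournament selection (no crossover, mutation,
    or replacement) until the population is n copies of one individual.

    Each slot of the next generation is filled by the winner (higher
    fitness) of a tournament between two distinct random individuals.
    """
    rng = random.Random(seed)
    pop = list(range(1, n + 1))  # fitnesses 1..n, all distinct
    gens = 0
    while len(set(pop)) > 1:
        pop = [max(rng.sample(pop, 2)) for _ in range(n)]
        gens += 1
    return gens

# Average over independent trials; for N = 100 this lands near log2(N).
est = statistics.mean(takeover_generations(100, seed=i) for i in range(30))
```

Selection alone only ever removes diversity, so the loop always terminates; the interesting quantity is how fast, as a function of N.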

Sorting algorithm

[figure: three charts, labelled a, b and c, each showing the successive states of an array under a different sorting algorithm]

I want to know which chart in this picture is bubble sort, which is insertion sort, and which is selection sort. My answer: a is bubble sort, b is insertion sort, and c is selection sort. Is this right? Is there also a reference link explaining how to tell which picture corresponds to which sort?
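One way to check without a reference link is to trace each algorithm’s passes on the same array and match the signatures against the picture: bubble sort grows a sorted suffix of global maxima, selection sort grows a sorted prefix of global minima, and insertion sort keeps a prefix of the original elements sorted among themselves. A Python sketch that records the array after each pass:

```python
def snapshots(sort):
    """Run `sort` on a fixed array, recording the array after each pass."""
    data = [5, 2, 8, 1, 9, 3]
    frames = []
    sort(data, frames)
    return frames

def bubble(a, frames):
    # Signature: after pass i, the i+1 largest elements sit sorted at the end.
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
        frames.append(a[:])

def selection(a, frames):
    # Signature: after pass i, the i+1 smallest elements sit sorted in front.
    for i in range(len(a)):
        m = min(range(i, len(a)), key=a.__getitem__)
        a[i], a[m] = a[m], a[i]
        frames.append(a[:])

def insertion(a, frames):
    # Signature: after pass i, the first i+1 ORIGINAL elements are sorted
    # among themselves -- unlike selection sort's globally-smallest prefix.
    for i in range(1, len(a)):
        x = i
        while x > 0 and a[x - 1] > a[x]:
            a[x - 1], a[x] = a[x], a[x - 1]
            x -= 1
        frames.append(a[:])
```

Printing `snapshots(bubble)`, `snapshots(selection)` and `snapshots(insertion)` side by side makes the three visual signatures easy to compare against the charts.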

Algorithm for finding an irreducible kernel of a DAG in O(V*e) time, where e is number of edges in output

An irreducible kernel is the term used in Handbook of Theoretical Computer Science (HTCS), Volume A “Algorithms and Complexity” in the chapter on graph algorithms. Given a directed graph G=(V,E), an irreducible kernel is a graph G’=(V,E’) where E’ is a subset of E, and both G and G’ have the same reachability (i.e. their transitive closures are the same), and removing any edge from E’ would not satisfy this condition, i.e. E’ is minimal (although not necessarily the minimum size possible).

A minimum equivalent graph is similar, except it also has the fewest number of edges among all such graphs. Both of these concepts are similar to a transitive reduction, but not the same because a transitive reduction is allowed to have edges that are not in E.
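For context (this is emphatically not Noltemeier’s O(V*e) algorithm, just the naive baseline): on a DAG an edge (u, v) is redundant exactly when v remains reachable from u without it, so a greedy delete-and-check loop already produces an irreducible kernel, at O(E*(V + E)) cost. A Python sketch:

```python
from collections import defaultdict

def irreducible_kernel(edges):
    """Naive greedy irreducible kernel of a DAG: drop edge (u, v) whenever
    v stays reachable from u without it. Each successful drop preserves
    the transitive closure, and on termination no edge can be removed,
    so the remaining edge set is minimal. O(E * (V + E)), far from the
    O(V * e) bound discussed above.
    """
    kept = set(edges)
    for u, v in list(edges):
        kept.discard((u, v))
        if not reachable(u, v, kept):
            kept.add((u, v))  # edge is essential; put it back
    return kept

def reachable(src, dst, edge_set):
    """Depth-first reachability check over an explicit edge set."""
    adj = defaultdict(list)
    for a, b in edge_set:
        adj[a].append(b)
    stack, seen = [src], {src}
    while stack:
        x = stack.pop()
        if x == dst:
            return True
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return False

# Total order on {1, 2, 3}: only the covering edges 1->2 and 2->3 survive.
E = {(1, 2), (1, 3), (2, 3)}
```

On a DAG the result coincides with the unique transitive reduction, which is what makes the total-order example below a useful stress case.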

HTCS says that there is an algorithm to calculate an irreducible kernel of a directed acyclic graph in O(V*e) time, where V is the number of vertices and e is the number of edges in the irreducible kernel, i.e. in the output of the algorithm. The reference given for this is the following paper, which I have not been able to find online yet (links or other sources welcome; I can ask at a research library soon if nothing turns up).

Noltemeier, H., “Reduction of directed graphs to irreducible kernels”, Discussion paper 7505, Lehrstuhl Mathematische Verfahrensforschung (Operations Research) und Datenverarbeitung, Univ. Göttingen, Göttingen, 1975.

Does anyone know what this algorithm is? It surprises me a little that it includes the number of edges in the output graph, since that would mean it should run in O(n^2) time given an input graph with O(n^2) edges that represents a total order, e.g. all nodes are assigned integers from 1 up to n, and there is an edge from node i to j if i < j. That doesn’t seem impossible, mind you, simply surprising.