What is the name of this sorting algorithm?

I have typeset this sorting algorithm in a Wikipedia draft. It is a comparison sort, yet it can handle only numeric arrays.

Basically, it marches through the input array and inserts each array component into a sorted doubly-linked list. It has two optimizations:

  1. each list node has a count; instead of adding a new node with the same key, the existing node is reused by incrementing its counter,
  2. the sort maintains a reference to the most recently accessed list node. This allows a faster search for the next list node in case the two relevant keys are “close” to each other.
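For concreteness, the description above can be sketched as follows. This is a minimal illustration, not the draft's code; `Node` and `finger` are my names for the counted list node and the most-recently-accessed reference.

```java
// Sketch of the described sort: a sorted doubly-linked list whose nodes
// carry a duplicate counter (optimization 1), plus a "finger" at the most
// recently accessed node from which each search starts (optimization 2).
public class ListCountSort {

    static class Node {
        int key;
        int count = 1;
        Node prev, next;
        Node(int key) { this.key = key; }
    }

    public static int[] sort(int[] input) {
        Node head = null, finger = null;

        for (int key : input) {
            if (head == null) {                 // first element starts the list
                head = finger = new Node(key);
                continue;
            }
            // Optimization 2: search from the finger, not from the head.
            Node cur = finger;
            while (cur.prev != null && cur.key > key) cur = cur.prev;
            while (cur.next != null && cur.next.key <= key) cur = cur.next;

            if (cur.key == key) {
                cur.count++;                    // optimization 1: reuse the node
            } else if (cur.key < key) {         // insert after cur
                Node n = new Node(key);
                n.prev = cur;
                n.next = cur.next;
                if (cur.next != null) cur.next.prev = n;
                cur.next = n;
                cur = n;
            } else {                            // key precedes the current head
                Node n = new Node(key);
                n.next = cur;
                cur.prev = n;
                head = n;
                cur = n;
            }
            finger = cur;
        }

        // Emit keys in order, expanding the counters.
        int[] out = new int[input.length];
        int i = 0;
        for (Node n = head; n != null; n = n.next)
            for (int c = 0; c < n.count; c++) out[i++] = n.key;
        return out;
    }
}
```

With equal keys collapsed into counters and the finger exploiting locality, nearly-sorted or low-distinct-key inputs need very few link traversals per insertion.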

My question is: what is the name of that sort? (I believe I am not the first one to conceive it.)

Algorithm to compute sum of all unique edge pairs of a tree

The given tree is an undirected graph with n vertices and n-1 edges. The algorithm should compute the sum of the path costs between all vertex pairs; there are nC2 = n(n-1)/2 such pairs in total. The time complexity of the algorithm below is n(n-1)/2, i.e. O(n^2). Please suggest an algorithm with better space and time complexity if possible. Below is the Java implementation.

import java.util.*;

public class AllPairSumTree {

    static long sumAllPairs = 0;

    public static void main(String[] args) {
        // Total number of vertices
        int N = 7;

        // Adjacency list
        LinkedList<Integer>[] adjacencyList = new LinkedList[N];
        for (int ii = 0; ii < N; ii++) {
            adjacencyList[ii] = new LinkedList<Integer>();
        }

        // Weighted graph matrix
        int[][] weightedGraph = new int[N][N];
        for (int ii = 0; ii < N; ii++) {
            for (int jj = 0; jj < N; jj++) {
                weightedGraph[ii][jj] = (ii == jj) ? 0 : Integer.MAX_VALUE;
            }
        }

        /*
         * Input pattern: vertex1, vertex2, cost
         *
         * Total vertices: N, total edges: N-1 (tree, undirected graph)
         */
        int[] inputGraph = { 1, 2, 1,
                             2, 3, 2,
                             3, 4, 3,
                             3, 5, 4,
                             5, 6, 6,
                             5, 7, 5 };

        // Fill the adjacency list and matrix from the input graph
        for (int ii = 0; ii < N - 1; ii++) {
            int vertex1 = inputGraph[ii * 3 + 0] - 1;
            int vertex2 = inputGraph[ii * 3 + 1] - 1;
            int cost    = inputGraph[ii * 3 + 2];

            adjacencyList[vertex1].add(vertex2);
            adjacencyList[vertex2].add(vertex1);    // bidirectional edge

            weightedGraph[vertex1][vertex2] = cost;
            weightedGraph[vertex2][vertex1] = cost; // bidirectional edge
        }

        sumAllPairs = 0;
        int currentVertex = 0;
        LinkedHashSet<Integer> visitedSet = new LinkedHashSet<Integer>(N);
        int lastVisitedVertex = -1;
        allPairSum(weightedGraph, adjacencyList, currentVertex, visitedSet, lastVisitedVertex);

        System.out.println(sumAllPairs);
    }

    /*
     * Say the graph has vertices 1,2,3,4,5,6,7.
     *
     * allPairSum() will compute the sum of path costs like this:
     *     21 + (31+32) + (41+42+43) + (51+52+53+54) + (61+62+63+64+65) + (71+72+73+74+75+76)
     * where ij represents the path from vertex i to vertex j.
     *
     * Time complexity: N(N-1)/2, i.e. Combination(N,2)
     */
    private static void allPairSum(int[][] weightedGraph, LinkedList<Integer>[] adjacencyList,
            int currentVertex, LinkedHashSet<Integer> visitedSet, int lastVisitedVertex) {

        for (Integer visitedVer : visitedSet) {
            int cost = weightedGraph[visitedVer][lastVisitedVertex]
                     + weightedGraph[lastVisitedVertex][currentVertex];
            sumAllPairs += cost;
            weightedGraph[visitedVer][currentVertex] = cost;
            weightedGraph[currentVertex][visitedVer] = cost;
        }

        visitedSet.add(currentVertex);

        for (Integer neighbourVert : adjacencyList[currentVertex]) {
            if (neighbourVert != lastVisitedVertex) {
                // neighbourVert becomes currentVertex,
                // currentVertex becomes lastVisitedVertex
                allPairSum(weightedGraph, adjacencyList, neighbourVert, visitedSet, currentVertex);
            }
        }
    }
}

Sample tree with 7 vertices and 6 edges
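For reference, one standard improvement is an O(n) edge-contribution scan: an edge (u, v, w) lies on the paths of exactly size * (n - size) vertex pairs, where size is the number of vertices on the child side of the edge, so it contributes w * size * (n - size) to the total. This is a sketch of that well-known technique, not the poster's code; all names are illustrative.

```java
import java.util.*;

// O(n) alternative: root the tree, compute subtree sizes with one DFS,
// and add each edge's contribution w * size * (n - size) as the DFS unwinds.
public class EdgeContribution {

    // edges: rows of {u, v, w} with 0-based vertex indices
    public static long allPairDistanceSum(int n, int[][] edges) {
        List<int[]>[] adj = new List[n];
        for (int i = 0; i < n; i++) adj[i] = new ArrayList<>();
        for (int[] e : edges) {
            adj[e[0]].add(new int[]{e[1], e[2]});
            adj[e[1]].add(new int[]{e[0], e[2]});
        }
        long[] total = {0};
        dfs(0, -1, adj, n, total);
        return total[0];
    }

    // Returns the size of the subtree rooted at u and accumulates each
    // child edge's contribution into total.
    private static int dfs(int u, int parent, List<int[]>[] adj, int n, long[] total) {
        int size = 1;
        for (int[] e : adj[u]) {
            if (e[0] == parent) continue;
            int childSize = dfs(e[0], u, adj, n, total);
            total[0] += (long) e[1] * childSize * (n - childSize);
            size += childSize;
        }
        return size;
    }
}
```

On the sample tree above this visits each vertex once, so both time and extra space are O(n) instead of O(n^2).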

Is there an algorithm to compute a Belyi map for the Riemann surface?

Let $y^2 = x^5 - x - 1$ be an affine model of a projective complex curve. Is there an algorithm to compute a Belyi map (preferably of small degree), i.e., a map to the projective line ramified only over $\{0, 1, \infty\}$?

In my attempts to do this by hand, I get ramification at 4 points, and subsequently using a Shabat polynomial will skyrocket the degree of the map. Is there a way to avoid increasing the degree to hundreds or thousands? I have $\beta = h \circ g \circ f$, where $f$ is the projection on $x$, $g = x^5 - x - 1$ and $h = \frac{12500x + 18750x^2 + 12500x^3 + 3125x^4}{-2869}$. $\beta$ gives ramification at the four points $\{0, 1, \infty, \frac{3125}{2869}\}$. It is also possible to use $g = x^5 - x$ with a different $h$, but the Shabat polynomial will still be of very large degree…
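Written out, the attempted composition (merely restating the formulas above, and using that $y^2 = x^5 - x - 1$ on the curve) is

$$\beta(x, y) \;=\; h\bigl(g(f(x, y))\bigr) \;=\; \frac{12500\,t + 18750\,t^2 + 12500\,t^3 + 3125\,t^4}{-2869}, \qquad t = g(x) = x^5 - x - 1 = y^2 .$$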

Why does the BFR (Bradley, Fayyad and Reina) algorithm assume clusters to be normally distributed around their centroids?

I’m following a course on data mining based on the lectures from Stanford University and the book Mining of Massive Datasets.

On the topic of clustering, the BFR algorithm is explained in this video.
I understand how the algorithm works, but I am unclear on why the algorithm makes the strong assumption that each cluster is normally distributed around a centroid in Euclidean space.

The video explains that the assumption implies that clusters look like axis-aligned ellipses, which is understandable as the dimensions must be independent.
I’ve watched the video a few times, and read the section in the book (freely downloadable using the first link) on pages 257-259, but I’m unable to grasp why that assumption is made, and why it has to be made.
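One way to see where the assumption bites is in the book's cluster summary itself: BFR keeps, per cluster, only the count N, the per-dimension sum SUM, and the per-dimension sum of squares SUMSQ. The sketch below (class and field names are mine, not the book's) shows that these statistics can only recover a mean and a variance per axis, with no cross-dimension term, which is exactly an axis-aligned normal ellipse.

```java
// BFR summarizes a cluster by (N, SUM, SUMSQ) per dimension.
// mean_i = SUM_i / N and var_i = SUMSQ_i / N - mean_i^2 are recoverable,
// but no covariance between dimensions is — hence axis-aligned ellipses.
public class BfrSummary {
    long n;            // number of points absorbed into the cluster
    double[] sum;      // per-dimension sum of coordinates
    double[] sumSq;    // per-dimension sum of squared coordinates

    BfrSummary(int dims) {
        sum = new double[dims];
        sumSq = new double[dims];
    }

    // Absorbing a point updates only these O(dims) statistics;
    // the point itself can then be discarded.
    void add(double[] point) {
        n++;
        for (int i = 0; i < point.length; i++) {
            sum[i] += point[i];
            sumSq[i] += point[i] * point[i];
        }
    }

    double mean(int i)     { return sum[i] / n; }
    double variance(int i) { return sumSq[i] / n - mean(i) * mean(i); }

    // Convenience builder for a batch of points.
    static BfrSummary summarize(double[][] points) {
        BfrSummary s = new BfrSummary(points[0].length);
        for (double[] p : points) s.add(p);
        return s;
    }
}
```

Since membership tests (e.g. Mahalanobis-style distance to the centroid) must be computed from these summaries alone, the model has to be one that per-axis mean and variance fully describe.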

Could someone explain this for me?

How does MO’s algorithm with update queries work?

MO’s algorithm answers queries on a given array by dividing it into blocks (https://blog.anudeep2011.com/mos-algorithm/). I understand the general MO’s algorithm, but I am having difficulty understanding MO’s algorithm with updates (https://www.youtube.com/watch?v=gUpfwVRXhNY).

Since articles on MO’s algorithm with updates are rare, can someone who knows this algorithm explain it?
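To fix notation for an answer, here is the basic (no-update) version from the linked post, sketched for distinct-count range queries: sort queries by (block of l, then r) and slide a window, adding/removing one element at a time. The update variant adds query time as a third sort key and a third pointer that applies/rolls back updates; this sketch shows only the base technique, with illustrative names.

```java
import java.util.*;

// Basic MO's algorithm: answer "number of distinct values in a[l..r]"
// offline in O((n + q) * sqrt(n)) add/remove operations.
public class MosAlgorithm {

    // queries: rows of {l, r}, inclusive, 0-based; values of a are >= 0.
    public static int[] distinctCounts(int[] a, int[][] queries) {
        int n = a.length, q = queries.length;
        int block = Math.max(1, (int) Math.sqrt(n));

        // Sort query indices by (block of l, then r).
        Integer[] order = new Integer[q];
        for (int i = 0; i < q; i++) order[i] = i;
        Arrays.sort(order, (x, y) -> {
            int bx = queries[x][0] / block, by = queries[y][0] / block;
            if (bx != by) return bx - by;
            return queries[x][1] - queries[y][1];
        });

        int maxVal = 0;
        for (int v : a) maxVal = Math.max(maxVal, v);
        int[] freq = new int[maxVal + 1];

        int curL = 0, curR = -1, distinct = 0;   // window [curL, curR], initially empty
        int[] answer = new int[q];

        for (int qi : order) {
            int l = queries[qi][0], r = queries[qi][1];
            // Expand first, then shrink, moving one index per step.
            while (curR < r) { if (freq[a[++curR]]++ == 0) distinct++; }
            while (curL > l) { if (freq[a[--curL]]++ == 0) distinct++; }
            while (curR > r) { if (--freq[a[curR--]] == 0) distinct--; }
            while (curL < l) { if (--freq[a[curL++]] == 0) distinct--; }
            answer[qi] = distinct;
        }
        return answer;
    }
}
```

In the update variant the block size changes (commonly around n^(2/3)) and the window additionally "time-travels" by applying or undoing point updates until it matches each query's timestamp.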

Proof of lemma from Hong’s article about multi-threaded max flow algorithm

I’m struggling to prove Lemma 3 and Lemma 4 from an article about a parallel version of the push-relabel algorithm: A lock-free multi-threaded algorithm for the maximum flow problem.

Lemma 3. Any trace of two push and/or lift operations is equivalent to either a stage-clean trace or a stage-stepping trace.


Lemma 4. For any trace of three or more push and/or lift operations, there exists an equivalent trace consisting of a sequence of non-overlapping traces, each of which is either stage-clean or stage-stepping.

A PDF version of the article can be found here.

Algorithm for automatic construction of natural deduction proofs

I was wondering if there exists any algorithm for the automatic construction of natural deduction proofs. I’m interested in propositional logic and first-order logic.

If there is no algorithm, can you provide some proof of this fact?

PD0: I’m not interested in any page for solving these kinds of problems. My question is more theoretical.

PD1: This is not homework, just personal interest.

How can I write a genetic programming algorithm, given that the Halting problem is unsolvable?

I am learning genetic programming, and to practice I want to write a simple algorithm which evolves a program that computes a simple function (say, square root). I intend to represent programs as abstract syntax trees.

However, one of the functors is the while loop. Of course, in assessing a tree’s fitness, I have to evaluate the program: but the halting problem is unsolvable. How can I tell whether a given tree stops? Of course I can’t, so what are some practical ways to approach this problem?

Should I make my simple tree language not Turing-complete? Or maybe give a timeout to each tree?
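The timeout option can be sketched as a step-budgeted interpreter: every node visit spends one unit of budget, and exhausting the budget aborts the evaluation, which the caller then treats as worst fitness. The mini-language below and all its names are illustrative assumptions, not a fixed design.

```java
// Evaluate a GP tree under a step budget so that non-terminating
// individuals are cut off deterministically instead of hanging the run.
public class BudgetedEval {

    static class BudgetExceeded extends RuntimeException {}

    // A tiny expression language as an AST; eval() charges one step per
    // node visit, so even infinite loops eventually hit the budget.
    interface Node { double eval(double x, int[] stepsLeft); }

    static void charge(int[] stepsLeft) {
        if (--stepsLeft[0] < 0) throw new BudgetExceeded();
    }

    static Node num(double c) { return (x, s) -> { charge(s); return c; }; }
    static Node varX()        { return (x, s) -> { charge(s); return x; }; }
    static Node add(Node a, Node b) {
        return (x, s) -> { charge(s); return a.eval(x, s) + b.eval(x, s); };
    }

    // while (cond > 0) body — the functor where non-termination can arise.
    static Node loopWhile(Node cond, Node body) {
        return (x, s) -> {
            charge(s);
            double last = 0;
            while (cond.eval(x, s) > 0) last = body.eval(x, s);
            return last;
        };
    }

    // Returns the program's value, or null if the budget ran out;
    // the fitness function would map null to the worst score.
    static Double run(Node program, double x, int maxSteps) {
        try {
            return program.eval(x, new int[]{maxSteps});
        } catch (BudgetExceeded e) {
            return null;
        }
    }
}
```

A step budget is usually preferred over a wall-clock timeout because it is deterministic and cheap; alternatively, restricting the language to bounded loops (not Turing-complete) removes the problem entirely at the cost of expressiveness.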