Efficient algorithm for this combinatorial problem

$\newcommand{\argmin}{\mathop{\mathrm{argmin}}\limits}$

I am working on a combinatorial optimization problem and need to solve the following minimization. It arose naturally from a method I chose to use in an assignment I was working on.

Given a fixed set $\Theta$ of $N$ elements, each in $(0,1)$ ($N$ is about 25), I need to find a permutation of the elements of $\Theta$ such that $$\vec K = \argmin_{\vec k = \mathrm{Permutation}(\Theta)} \sum_{i=1}^N t_i D(\mu_i \| k_i)$$ where $\vec t, \vec \mu$ are given vectors of length $N$ and $D(p \| q)$ is the KL divergence between Bernoulli distributions with parameters $p$ and $q$ respectively. Further, the $N$ elements of $\vec t$ sum to 1 and all elements of $\vec \mu$ lie in $[0,1]$.

Going through all $N!$ permutations is out of the question. A greedy-type algorithm that does not give the exact $\vec K$ would also be acceptable to me if there is no other apparent method. Please let me know how to proceed!
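Since the cost of placing element $\theta_j$ at position $i$ is just $t_i D(\mu_i \| \theta_j)$, independent of where the remaining elements go, this is a linear assignment problem: it can be solved exactly in $O(N^3)$ by the Hungarian algorithm instead of enumerating all $N!$ permutations. Below is a minimal sketch using SciPy's linear_sum_assignment; the helper names and the $0 \log 0 = 0$ edge-case handling are my own assumptions, not from the original post.

import numpy as np
from scipy.optimize import linear_sum_assignment

def bernoulli_kl(p, q):
    # D(p || q) for Bernoulli parameters, with the 0*log(0) = 0 convention.
    # q lies in (0, 1) by assumption, so both logarithms are finite.
    d = 0.0
    if p > 0:
        d += p * np.log(p / q)
    if p < 1:
        d += (1 - p) * np.log((1 - p) / (1 - q))
    return d

def best_permutation(theta, t, mu):
    # cost[i][j] = contribution of assigning theta[j] to position i
    n = len(theta)
    cost = np.array([[t[i] * bernoulli_kl(mu[i], theta[j]) for j in range(n)]
                     for i in range(n)])
    # Exact Hungarian-style solver; O(n^3) is trivial for n around 25.
    _, cols = linear_sum_assignment(cost)
    return [theta[j] for j in cols]

For $N$ around 25 this is essentially instantaneous, so a greedy approximation should not be necessary.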

In theory, should neuromorphic computers be more efficient than traditional computers when performing logic?

There is a general sentiment about neuromorphic computers that they are simply "more efficient" than von Neumann machines.

I’ve heard a lot of talk about neuromorphic computers in the context of machine learning.
But is there any research into performing logic and maths in general on such computers? How would one translate arithmetic, logic, and algorithms into "instructions" for a neuromorphic computer, if there are no logic structures in the hardware itself?

It is common to draw parallels with a brain in this context, so here’s one: Brains are great at recognising faces and such, but I don’t think I can do maths faster than an Arduino (and that thing doesn’t need much energy).

Most efficient method for set intersection

Suppose I have two finite sets, $A$ and $B$, with arbitrarily large cardinalities, whose ordered integral elements are determined by unique (and well-defined) polynomial generating functions $f:\mathbb{N}\rightarrow\mathbb{Z}$ given by, say, $f_1(x_i)$ and $f_2(x_j)$, respectively. Assume, also, that $A\cap B$ is always a singleton set $\{a\}$ such that $a=f_1(x_i)=f_2(x_j)$, where I've proven that $i\neq j$.

Assuming you can even avoid the memory-dump problem, it seems the worst way to find $\{a\}$ is to generate both sets and then check for the intersection. I wrote simple code in Sagemath that does this, and, as I suspected, it doesn't perform well for sets with even moderately large cardinalities.

Is there a better way to (program a computer to) find the intersection of two sets, or is it just as hopeless (from a time-complexity perspective) as trying to solve $f_1(x_i)=f_2(x_j)$ directly when the cardinalities are prohibitively large? Is there a parallel-computing possibility? If not, perhaps there's a way to limit the atomistic search based on a range of values; i.e., each loop terminates the search after it finds the first $i$ value such that $f_1(x_i)>f_2(x_j)$, knowing that $f_1(x_{i+1}), f_1(x_{i+2}), f_1(x_{i+3}), \cdots, f_1(x_{i+n})>f_1(x_i)>f_2(x_j)$.
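If both generating polynomials are strictly increasing over the index range being searched (true for any polynomial with positive leading coefficient once the indices are large enough, which would need checking), the two sequences can be walked in lockstep without materializing either set, which is exactly the range-limited termination idea above. A minimal sketch under that monotonicity assumption; the function names and the example polynomials are hypothetical:

def first_common_value(f1, f2, max_index):
    # Two-pointer merge over the increasing sequences f1(0), f1(1), ...
    # and f2(0), f2(1), ...; advance whichever sequence is behind.
    i = j = 0
    while i <= max_index and j <= max_index:
        a, b = f1(i), f2(j)
        if a == b:
            return a          # the (assumed singleton) intersection element
        if a < b:
            i += 1
        else:
            j += 1
    return None               # no common value within the index range

# Hypothetical example: x^2 + 3 meets 5x + 2 at 7 = f1(2) = f2(1), so i != j.
print(first_common_value(lambda x: x**2 + 3, lambda x: 5*x + 2, 10**6))

This uses O(1) memory and at most one polynomial evaluation per index, in contrast to generating and storing both sets before intersecting.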

An efficient way of calculating φ(φ(p*q)) where p and q are prime

Let $p$ and $q$ be prime numbers and $\phi$ Euler's totient function. Is there an efficient way of computing $\phi(\phi(pq)) = \phi((p-1)(q-1))$ that is not simply based on factoring $(p-1)$ and $(q-1)$?

Obviously, if neither $p$ nor $q$ equals two, $(p-1)$ and $(q-1)$ are even, and consequently their prime factorizations are entirely different from the prime factorizations of $p$ and $q$. Therefore I assume that no such shortcut exists.

Am I overlooking something?
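For reference, here is the baseline to beat, written with SymPy: the inner totient is free via $\phi(pq) = (p-1)(q-1)$ for distinct primes, but the outer totient still factors $(p-1)(q-1)$ internally. The specific primes are just an illustration.

from sympy import totient, isprime

p, q = 101, 103
assert isprime(p) and isprime(q) and p != q

# Inner step is free for distinct primes: phi(p*q) = (p-1)*(q-1).
assert totient(p * q) == (p - 1) * (q - 1)

# The outer step is the expensive one: computing phi of (p-1)*(q-1)
# requires its prime factorization.
print(totient((p - 1) * (q - 1)))  # 2560 for this example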

What is the most efficient way to turn a list of directory path strings into a tree?

I’m trying to find the most efficient way of turning a list of path strings into a tree represented as a hierarchical list of hash maps, using these rules:

  • Node labels are delimited/split by ‘/’
  • Hash maps have the structure:
    {
        label: "Node 0",
        children: []
    }
  • Node labels are also keys, so for example all nodes with the same label at the root level will be merged

So the following input:

[     "Node 0/Node 0-0",     "Node 0/Node 0-1",     "Node 1/Node 1-0/Node 1-0-0" ] 

Would turn into:

[
    {
        label: "Node 0",
        children: [
            { label: "Node 0-0", children: [] },
            { label: "Node 0-1", children: [] },
        ]
    },
    {
        label: "Node 1",
        children: [
            {
                label: "Node 1-0",
                children: [
                    { label: "Node 1-0-0", children: [] },
                ]
            },
        ]
    },
]
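One way to get time linear in the total number of path segments is a single pass with a lookup table keyed by (parent, label), so each segment is matched or created in O(1) on average. Here is a minimal Python sketch of that idea; the dict-based node shape mirrors the structure above, and the function name is my own:

def build_tree(paths):
    root = []    # top-level list of nodes
    index = {}   # (id of a children list, label) -> node in that list
    for path in paths:
        children = root
        for label in path.split("/"):
            # id(children) is stable here because every children list
            # stays reachable from the tree for the whole build.
            key = (id(children), label)
            node = index.get(key)
            if node is None:
                node = {"label": label, "children": []}
                index[key] = node
                children.append(node)
            children = node["children"]
    return root

paths = [
    "Node 0/Node 0-0",
    "Node 0/Node 0-1",
    "Node 1/Node 1-0/Node 1-0-0",
]
print(build_tree(paths))  # produces the nested structure shown above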

Go-Back-N Protocol not efficient?

Let’s say we have five packets

p1 p2 p3 p4 p5

to be sent sequentially.

For some reason, p3 gets delayed, so it is the last packet to arrive at the receiver.

So below is the arrival order on the receiver's end:

p1 p2 p4 p5 p3

According to the Go-Back-N protocol, the receiver will only send an acknowledgement for p2 when it receives p5.

The receiver then gets p3 right after p5, and sends an acknowledgement for p3 to the sender.

But there will still be a timeout, and the sender still has to re-send p4 and p5 even though the receiver did receive all the packets. Isn't Go-Back-N really inefficient in this case?
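One detail worth making explicit: in the textbook formulation of Go-Back-N, the receiver does not buffer out-of-order packets at all, so p4 and p5 are discarded on arrival and the retransmission is unavoidable rather than redundant. A tiny sketch of that receiver rule (the standard formulation, not any particular implementation):

def gbn_receiver(arrivals, expected=1):
    # Go-Back-N receiver: accept only the next in-order packet and
    # discard everything else; every arrival triggers a cumulative ACK
    # for the highest in-order packet received so far.
    acks = []
    for seq in arrivals:
        if seq == expected:
            expected += 1      # in order: deliver and advance
        acks.append(expected - 1)
    return acks

print(gbn_receiver([1, 2, 4, 5, 3]))  # [1, 2, 2, 2, 3]: p4 and p5 were dropped

The inefficiency is real, and it is exactly the trade-off Selective Repeat addresses by letting the receiver buffer out-of-order packets, at the cost of per-packet ACKs and more receiver state.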

Data structure for efficient group lookup

I need a data structure that allows efficient queries of the form "give me the group of x".

Let me give you an example:

Group 1: [a, b, c]
Group 2: [d, e]
Group 3: [f]

getGroupOf(d) -> [d, e]

There are no significant constraints on storage or construction time. I only need getGroupOf to be O(log n) or faster.

I am thinking about using a Dictionary<Element, Set<Element>> where the entries for all elements in a group share the same set reference. This would make lookup effectively O(1) or O(log n) depending on the dictionary implementation, but would result in a lot of entries.

This feels fairly bloated, and I am wondering: is there a more elegant data structure to accomplish this?
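For what it is worth, the shared-reference idea stores only one pointer per element plus one set object per group, which is hard to beat for O(1) lookups. A minimal Python sketch of that design (class and method names are mine):

class GroupIndex:
    # Each element maps to the one set object shared by its whole group,
    # so getGroupOf is a single hash lookup.
    def __init__(self):
        self._group_of = {}

    def add_group(self, elements):
        group = set(elements)           # one shared set per group
        for e in elements:
            self._group_of[e] = group
        return group

    def get_group_of(self, element):
        return self._group_of[element]  # O(1) average case

idx = GroupIndex()
idx.add_group(["a", "b", "c"])
idx.add_group(["d", "e"])
idx.add_group(["f"])
print(idx.get_group_of("d"))            # {'d', 'e'}

If groups ever need to be merged after construction, a disjoint-set (union-find) structure is the usual alternative, at the cost of returning a representative element rather than the member list directly.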