## Efficient sinc interpolation of a list

I’m looking for an efficient way to construct a sinc interpolation of a list of numbers, i.e. an interpolating function in the form of a Fourier series with a chosen maximum frequency. The input list is assumed to be an equidistant sample of some smooth real function (e.g. a band-limited function, one with a finite-support Fourier transform), and the interpolation should be a continuous function that is a sum of harmonics (cosine and sine functions) whose coefficients equal the Fourier coefficients of the list for all frequencies up to the chosen maximum (see Wikipedia: Whittaker–Shannon interpolation formula). Increasing the max frequency beyond the band limit of the original function should then give a perfect reconstruction of it. I’d guess there is a predefined Mathematica function that does this, but I couldn’t find one.

One way would be to apply Interpolation to the list, then NFourierSeries to the resulting InterpolatingFunction (choosing the desired max frequency), but this is obviously very inefficient. LowpassFilter should be doing something similar, but its output is a list, not a function. One option for an efficient solution that I’m trying to implement is to compute the Fourier coefficients of the list using Fourier, construct a list of harmonic functions with the right frequencies (e.g. Table[Sin[2 Pi n #/T], {n, nmax}] & for the sines), and then Dot-multiply the two lists. I found this excellent post that computes the correct factors: Numerical Fourier transform of a complicated function
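The "Fourier coefficients, then a sum of harmonics" approach described above can be sketched outside Mathematica as well. Below is a minimal NumPy version (an illustrative sketch, not the Mathematica implementation the question asks about): the FFT of the samples supplies the coefficients, and the returned closure evaluates the truncated trigonometric series at arbitrary points.

```python
import numpy as np

def trig_interp(samples, T, nmax):
    """Build a band-limited interpolant of equidistant samples over period T.

    Keeps Fourier coefficients up to harmonic nmax, mirroring the
    'coefficients from Fourier, Dot with harmonics' idea above.
    """
    N = len(samples)
    c = np.fft.rfft(samples) / N          # one-sided Fourier coefficients
    nmax = min(nmax, len(c) - 1)

    def f(t):
        t = np.asarray(t, dtype=float)
        result = np.full_like(t, c[0].real)   # DC term
        for n in range(1, nmax + 1):
            w = 2 * np.pi * n / T
            # factor 2 because rfft folds the negative frequencies
            result += 2 * (c[n].real * np.cos(w * t) - c[n].imag * np.sin(w * t))
        return result
    return f

# sample a band-limited function and reconstruct it
T, N = 1.0, 16
ts = np.arange(N) * T / N
samples = np.sin(2 * np.pi * 3 * ts) + 0.5 * np.cos(2 * np.pi * 5 * ts)
f = trig_interp(samples, T, nmax=7)
print(np.max(np.abs(f(ts) - samples)))    # ~0 at the sample points
```

Since the test signal only contains harmonics 3 and 5, any nmax of at least 5 reproduces the samples exactly, which is the "perfect reconstruction beyond the band limit" behaviour described above.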

## Disk space efficient foreign key index for insert intensive scientific database?

I’m tuning a scientific database whose associated simulation is very insert-intensive (i.e., it runs for a long time inserting data, then executes a summary query at the end). One of the tables is starting to cause problems: it is 235 GB with 261 GB of indexes, and the server only has 800 GB, so we would like to free up some space.

Currently there is one foreign key reference (integer data) that is stored as a clustered b-tree. This has been good for the summary queries, but it likely isn’t helping the disk space issues.

Is there a more disk-efficient way of storing this foreign key index? Would it make sense to switch to a hash index instead of the b-tree?

## What is an efficient way to get a look-at direction from either a quaternion or a transformation matrix?

So, I have an object in my custom engine (C++) with a column-major transform in world space. I’m using a package that takes a look-at direction as input. What’s the most efficient way to get a look-at direction from this transform? Do I extract the rotation matrix? Do I try to extract a quaternion?
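One common observation (a sketch, not engine-specific advice): in a column-major world transform the first three columns are the object's rotated basis vectors, so the look-at direction is simply one of those columns normalized, with no quaternion conversion needed. Which column, and whether it must be negated, depends on the engine's axis convention, e.g. OpenGL-style cameras look down negative z. Illustrated in Python with a hypothetical transform:

```python
import numpy as np

# Hypothetical 4x4 world transform: columns 0-2 are the rotated basis
# vectors (right, up, forward), column 3 is the translation.
M = np.array([
    [0.0, 0.0, 1.0, 5.0],
    [0.0, 1.0, 0.0, 2.0],
    [-1.0, 0.0, 0.0, 3.0],
    [0.0, 0.0, 0.0, 1.0],
])

def look_dir(m):
    """Extract and normalize the forward axis (third basis column).

    Depending on handedness/convention you may need -d instead of d.
    """
    d = m[:3, 2]
    return d / np.linalg.norm(d)

print(look_dir(M))   # the object's rotated z axis
```

In C++ this is just reading three floats out of the matrix, so it is cheaper than extracting either a full rotation matrix or a quaternion.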

## Efficient algorithm for this combinatorial problem [closed]

$$\newcommand{\argmin}{\mathop{\mathrm{argmin}}\limits}$$

I am working on a combinatorial optimization problem and I need to figure out a way to solve the following equation. It arose naturally from a method I chose for an assignment I was working on.

Given a fixed set $$\Theta$$ with $$N$$ elements ($$N$$ is about 25), each in $$(0,1)$$, I need to find a permutation of the elements of $$\Theta$$ such that $$\vec K = \argmin_{\vec k = Permutation(\Theta)} \sum_{i=1}^N t_i D(\mu_i||k_i)$$ where $$\vec t, \vec \mu$$ are given vectors of length $$N$$ and $$D(p||q)$$ is the KL divergence between Bernoulli distributions with parameters $$p$$ and $$q$$ respectively. Further, the $$N$$ elements of $$\vec t$$ sum to 1 and $$\vec \mu$$ has all elements in $$[0,1]$$.

Going through all $$N!$$ permutations is just impossible. A greedy-type algorithm that does not give the exact $$\vec K$$ would also be acceptable to me if there is no other apparent method. Please let me know how to proceed!
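One way to look at this (an observation, not necessarily the intended method): since a permutation assigns exactly one element of $$\Theta$$ to each position $$i$$, the objective is a linear assignment problem over the cost matrix $$C_{ij} = t_i\,D(\mu_i\|\theta_j)$$, which is solvable exactly in polynomial time. A sketch with random data, using `scipy.optimize.linear_sum_assignment`:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def kl_bern(p, q):
    """KL divergence D(p||q) between Bernoulli(p) and Bernoulli(q)."""
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

rng = np.random.default_rng(0)
N = 6                                  # small for illustration; 25 is also fine
theta = rng.uniform(0.05, 0.95, N)     # the fixed set to permute
mu = rng.uniform(0.05, 0.95, N)        # kept away from 0/1 to avoid log(0)
t = rng.dirichlet(np.ones(N))          # weights summing to 1

# cost of placing theta[j] at position i
C = t[:, None] * kl_bern(mu[:, None], theta[None, :])
rows, cols = linear_sum_assignment(C)  # Hungarian-style exact solution
K = theta[cols]                        # the optimal permutation of Theta
print(C[rows, cols].sum())             # minimal objective value
```

For $$N = 25$$ this runs in well under a millisecond, so no greedy approximation should be needed, assuming the assignment framing above actually matches your setting.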

## In theory, should neuromorphic computers be more efficient than traditional computers when performing logic?

There is a general sentiment that neuromorphic computers are simply "more efficient" than von Neumann machines.

I’ve heard a lot of talk about neuromorphic computers in the context of machine learning.
But is there any research into performing logic and maths in general on such computers? How would one translate arithmetic, logic, and algorithms into "instructions" for a neuromorphic computer if there are no logic structures in the hardware itself?

It is common to draw parallels with the brain in this context, so here’s one: brains are great at recognising faces and such, but I don’t think I can do maths faster than an Arduino (and that thing doesn’t need much energy).

## Most efficient method for set intersection

Suppose I have two finite sets, $$A$$ and $$B$$, with arbitrarily large cardinalities, the ordered integral elements of which are determined by unique (and well defined) polynomial generating functions $$f:\mathbb{N}\rightarrow\mathbb{Z}$$ given by, say, $$f_1(x_i)$$ and $$f_2(x_j)$$, respectively. Assume, also, that $$A\cap B$$ is always a singleton set $$\{a\}$$ such that $$a=f_1(x_i)=f_2(x_j)$$ where I’ve proven that $$i\neq j$$.

Assuming you can even avoid the memory-dump problem, it seems the worst way to find $$\{a\}$$ is to generate both sets and then check for the intersection. I wrote a simple code in Sagemath that does this, and, as I suspected, it doesn’t work well for sets with even moderately large cardinalities.

Is there a better way to (program a computer to) find the intersection of two sets, or is it just as hopeless (from a time-complexity perspective) as trying to solve $$f_1(x_i)=f_2(x_j)$$ directly when the cardinalities are prohibitively large? Is there a parallel-computing possibility? If not, perhaps there’s a way to limit the atomistic search based on a range of values—i.e., each loop terminates the search after it finds the first $$i$$ value such that $$f_1(x_i)>f_2(x_j)$$, knowing that $$f_1(x_{i+1}), f_1(x_{i+2}), f_1(x_{i+3}), \cdots, f_1(x_{i+n})>f_1(x_i)>f_2(x_j)$$.
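The range-limited search in the last paragraph is essentially a two-pointer merge: when both generators are increasing on the search range, you never need to materialize either set, only advance whichever pointer currently holds the smaller value. A sketch in Python with toy generators (not the polynomials from the question):

```python
def merge_intersect(f1, f2, n1, n2):
    """Find common values of two increasing integer sequences
    f1(0..n1-1) and f2(0..n2-1) by advancing whichever pointer holds
    the smaller value: O(n1 + n2) time, O(1) extra memory beyond the
    output, no full sets ever stored.
    """
    i = j = 0
    hits = []
    while i < n1 and j < n2:
        a, b = f1(i), f2(j)
        if a == b:
            hits.append(a)
            i += 1
            j += 1
        elif a < b:
            i += 1      # f1 is behind; its later values only grow
        else:
            j += 1
    return hits

# toy increasing generators for illustration
f1 = lambda x: x * x + 1   # 1, 2, 5, 10, 17, ...
f2 = lambda x: 3 * x + 2   # 2, 5, 8, 11, ...
print(merge_intersect(f1, f2, 20, 20))   # → [2, 5, 17, 26, 50]
```

Under the question's singleton-intersection assumption the returned list would have exactly one element, and the loop can even return as soon as the first match is found. This also parallelizes naturally by splitting the index range of one sequence into chunks and binary-searching the other.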

## An efficient way of calculating 𝜙(𝜙(p*q)) where p and q are prime

Let p and q be prime numbers and 𝜙 Euler’s totient function. Is there an efficient way of computing 𝜙(𝜙(p*q)) = 𝜙((p-1)(q-1)) that is not simply based on factoring (p-1) and (q-1)?

Obviously, if neither p nor q equals two, then (p-1) and (q-1) are even, and consequently their prime factorizations are entirely different from those of p and q. Therefore I assume that no such shortcut exists.

Am I overlooking something?
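For concreteness, here is the straightforward factoring-based route the question wants to avoid (a sketch with small hypothetical primes): since 𝜙(p*q) = (p-1)(q-1) is known for free, the only hard step is the inner totient, which requires factoring (p-1)(q-1).

```python
def phi(n):
    """Euler's totient via trial-division factorization, O(sqrt(n)).

    This factoring step is exactly the bottleneck discussed above.
    """
    result = n
    p = 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:                      # leftover prime factor
        result -= result // n
    return result

p, q = 101, 103                    # small illustrative primes
assert phi(p * q) == (p - 1) * (q - 1)   # the free outer step
print(phi(phi(p * q)))             # == phi((p-1)(q-1)), via factoring
```

For cryptographically sized p and q the trial division above is of course hopeless, which is why the question reduces to whether 𝜙((p-1)(q-1)) can be obtained without any factorization at all.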

## What is the most efficient way to turn a list of directory path strings into a tree?

I’m trying to find the most efficient way of turning a list of path strings into a tree of hierarchical hash maps using these rules:

• Node labels are delimited/split by ‘/’
• Hash maps have the structure:
```
{
    label: "Node 0",
    children: []
}
```
• Node labels are also keys, so for example all nodes with the same label at the root level will be merged

So the following input:

```
[
    "Node 0/Node 0-0",
    "Node 0/Node 0-1",
    "Node 1/Node 1-0/Node 1-0-0"
]
```

Would turn into:

```
[
    {
        label: "Node 0",
        children: [
            { label: "Node 0-0", children: [] },
            { label: "Node 0-1", children: [] }
        ]
    },
    {
        label: "Node 1",
        children: [
            {
                label: "Node 1-0",
                children: [
                    { label: "Node 1-0-0", children: [] }
                ]
            }
        ]
    }
]
```
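One way to get linear time in the total number of labels (a sketch in Python; the same shape works in JavaScript with plain objects): keep a per-node hash index from label to child, so merging "nodes with the same label at the same level" is an O(1) lookup instead of a scan of the children list.

```python
import json

def build_tree(paths):
    """Turn '/'-delimited path strings into a list of nested dicts,
    merging nodes that share a label at the same level."""
    root = {"children": [], "index": {}}
    for path in paths:
        node = root
        for label in path.split("/"):
            idx = node["index"]
            if label not in idx:                  # first time at this level
                child = {"label": label, "children": [], "index": {}}
                node["children"].append(child)
                idx[label] = child
            node = idx[label]                     # descend, merging duplicates

    def strip(n):
        # drop the helper index so only label/children remain in the output
        return {"label": n["label"], "children": [strip(c) for c in n["children"]]}
    return [strip(c) for c in root["children"]]

paths = [
    "Node 0/Node 0-0",
    "Node 0/Node 0-1",
    "Node 1/Node 1-0/Node 1-0-0",
]
print(json.dumps(build_tree(paths), indent=2))
```

Insertion order of first appearance is preserved, matching the expected output above; the auxiliary `index` dicts roughly double memory during construction, which is the usual trade-off for avoiding repeated linear scans.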

## Go-Back-N Protocol not efficient?

Let’s say we have five packets

p1 p2 p3 p4 p5

to be sent sequentially:

and for some reason p3 got delayed, so it is the last packet to arrive at the receiver.

So below is the receiving order on the receiver’s end:

p1 p2 p4 p5 p3

and according to the Go-Back-N protocol, the receiver will still only send an acknowledgment for p2 when it receives p5, since p4 and p5 arrive out of order and are discarded.

Then the receiver receives p3 right after p5 and sends an acknowledgment for p3 to the sender.

But there will still be a timeout, and the sender still has to re-send p4 and p5 even though the receiver did deliver all the packets it kept. Isn’t the Go-Back-N protocol really inefficient?
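The scenario above can be checked with a tiny model of a Go-Back-N receiver (a simplified sketch that ignores windows and timers): the receiver accepts only the next expected sequence number, discards everything else, and always sends a cumulative ACK for the last in-order packet.

```python
def gbn_receiver(arrivals):
    """Model a Go-Back-N receiver: accept only the next expected
    sequence number, discard anything out of order, and respond to
    every arrival with a cumulative ACK for the last in-order packet."""
    expected = 1
    acks = []
    for seq in arrivals:
        if seq == expected:
            expected += 1          # in order: deliver and advance
        # out-of-order packets are discarded, not buffered
        acks.append(expected - 1)  # cumulative ACK
    return acks

# p3 delayed past p5, as in the scenario above
print(gbn_receiver([1, 2, 4, 5, 3]))   # → [1, 2, 2, 2, 3]
```

The repeated ACKs for p2 show why the sender must eventually retransmit p4 and p5: the receiver threw them away. That is exactly the inefficiency the question identifies, and it is the trade-off Go-Back-N makes for a buffer-free receiver; selective repeat avoids it at the cost of receiver-side buffering.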

## What is an efficient alternative to iptables for bulk usage? [migrated]

Suppose I have a list of thousands of IP addresses to block. Right now I know how to iterate through the list and run, for each one:

```
iptables -A INPUT -s XX.XX.XX.XX -j DROP
```

But this means I would have to run thousands of processes!

How can I do this more efficiently?
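One common approach (a sketch, not the only answer; `ipset` with a hash set is the other usual suggestion) is to build a single `iptables-restore` input and load the whole list in one process. A small Python generator for such a payload, with sample documentation addresses:

```python
def restore_payload(ips):
    """Build one iptables-restore input that adds a DROP rule per
    address, so the whole list loads in a single process instead of
    thousands of separate iptables invocations."""
    lines = ["*filter"]
    lines += [f"-A INPUT -s {ip} -j DROP" for ip in ips]
    lines.append("COMMIT")
    return "\n".join(lines) + "\n"

# sample addresses from the documentation ranges
ips = ["192.0.2.1", "192.0.2.2", "198.51.100.7"]
print(restore_payload(ips), end="")

# load it without flushing existing rules (shell, run as root):
#   python3 gen_rules.py | iptables-restore --noflush
```

The `--noflush` flag appends to the live ruleset rather than replacing it; without it, `iptables-restore` flushes the listed tables first, so be deliberate about which behaviour you want.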