Given $n$ unique items and an $m^{th}$ normalised value, compute $m^{th}$ permutation without factorial expansion

We know that the number of permutations possible for $ n$ unique items is $ n!$ . We can uniquely label each permutation with a number from $ 0$ to $ (n!-1)$ .

Suppose $n=4$; the possible permutations with their labels are:

0:  1234   1:  1243   2:  1324   3:  1342   4:  1432   5:  1423
6:  2134   7:  2143   8:  2314   9:  2341   10: 2431   11: 2413
12: 3214   13: 3241   14: 3124   15: 3142   16: 3412   17: 3421
18: 4231   19: 4213   20: 4321   21: 4312   22: 4132   23: 4123

With any well-defined labelling scheme, given a number $m$, $0 \leq m < n!$, we can get back the permutation sequence. Further, these labels can be normalised to lie between $0$ and $1$. The above labels can then be transformed into:

0:       1234   0.0434:  1243   0.0869:  1324   0.1304:  1342
0.1739:  1432   0.2173:  1423   0.2608:  2134   0.3043:  2143
0.3478:  2314   0.3913:  2341   0.4347:  2431   0.4782:  2413
0.5217:  3214   0.5652:  3241   0.6086:  3124   0.6521:  3142
0.6956:  3412   0.7391:  3421   0.7826:  4231   0.8260:  4213
0.8695:  4321   0.9130:  4312   0.9565:  4132   1:       4123

Now, given $n$ and the $m^{th}$ normalised label, can we get the $m^{th}$ permutation while avoiding the expansion of $n!$? For example, in the above set of permutations, if we were given the normalised label $0.9$, is it possible to get the closest sequence 4312 as the answer without computing $4!$?
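A minimal sketch of one way to do this (my own, and it assumes a lexicographic labelling, which is not the labelling shown in the table above): treat the normalised label as a mixed-radix fraction and peel off one position at a time, so $n!$ is never formed explicitly. Floating-point precision, rather than the size of $n!$, then becomes the practical limit for large $n$.

def permutation_from_normalised_label(items, m):
    """Map a normalised label m in [0, 1) to a permutation of `items` under
    a lexicographic labelling, without ever computing n!.

    At each step the remaining interval is split into k equal parts
    (k = number of items left); the part containing m selects the next item."""
    items = list(items)
    result = []
    while items:
        k = len(items)
        idx = min(int(m * k), k - 1)   # which of the k equal sub-intervals m falls in
        result.append(items.pop(idx))  # that sub-interval picks the next item
        m = m * k - idx                # rescale m into [0, 1) for the next position
    return result

# Example with n = 4 and m = 0.9 (lexicographic labelling, so the result
# differs from the table above): prints [4, 2, 3, 1].
print(permutation_from_normalised_label([1, 2, 3, 4], 0.9))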

Tarokka effect for a given die roll and critical success/miss. Is this balanced?

I’m now setting up my first session as DM. I don’t have the whole story yet, but I have a very simple first session that will introduce 3 players to the game (there are 4 in total).

I’m thinking of a few ways to spice things up, based on their die rolls.

  • natural 20: critical success, the player gains advantage on their next attack
  • natural 1: critical miss, the player misses and has disadvantage on their next attack
  • natural 13: the player gets a random effect from the Tarokka deck

I may swap the 20 and 1 effects for the player (or me) drawing a card from the luck deck, but I’m not sure.

Does this sound like a good approach, or do I risk throwing off the balance of the game?

Minimize the maximum inner product with vectors in a given set

Given a set $S$ of non-negative unit vectors in $\mathbb R_+^n$, find a non-negative unit vector $x$ such that the largest inner product of $x$ and a vector $v \in S$ is minimized. That is, $$\min_{x\in \mathbb R_+^n,\,\|x\|_2=1}\ \max_{v\in S}\ x^Tv.$$

It seems like quite a fundamental problem in computational geometry. Has this problem been considered in the literature?

It can be formulated as an infinity norm minimization problem, which can in turn be expressed as a quadratically constrained LP. If the rows of matrix $A$ are the vectors in $S$, we seek $$\begin{align} \min_x\ & \|Ax\|_\infty \\ \text{s.t.}\ & x^Tx=1 \\ & x\geq 0. \end{align}$$ But the quadratic constraint is non-convex, so this is not very encouraging.
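Not part of the question, but as a numerical sanity check one can hand the epigraph form $\min t$ s.t. $Ax \le t\mathbf 1$, $x \ge 0$, $x^Tx = 1$ to a local solver. A minimal sketch with SciPy's SLSQP on synthetic data (all names below are my own); because of the non-convex norm constraint it only returns a local optimum from the chosen starting point:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
S = rng.random((5, 3))
S /= np.linalg.norm(S, axis=1, keepdims=True)   # rows: non-negative unit vectors
n = S.shape[1]

z0 = np.concatenate([np.full(n, 1 / np.sqrt(n)), [1.0]])   # z = (x, t)
res = minimize(
    lambda z: z[-1],                                           # minimise t
    z0,
    constraints=[
        {"type": "ineq", "fun": lambda z: z[-1] - S @ z[:n]},   # S x <= t, componentwise
        {"type": "eq",   "fun": lambda z: z[:n] @ z[:n] - 1.0}, # ||x||_2^2 = 1 (non-convex)
    ],
    bounds=[(0, None)] * n + [(None, None)],                   # x >= 0, t free
    method="SLSQP",
)
x = res.x[:n]
print("x =", x, " max inner product =", float((S @ x).max()))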

Fitting an integral function given a set of data points

I have a set of measurements of the resistivity of a given material at different thicknesses, and I’m trying to fit them using the Fuchs-Sondheimer model. My code is:

data = {{8.1, 60.166323}, {8.5, 47.01784}, {14, 52.534961},
   {15, 50.4681111501753}, {20, 39.0704975714401}, {30, 29.7737879177201},
   {45, 22.4406}, {50, 15.2659673601299}, {54, 18.189933218482},
   {73, 14.8377093467966}, {100, 15.249523361101}, {137, 15.249523361101},
   {170, 10.7190970441753}, {202, 15.249523361101}, {230, 10.9744085456615}}

G[d_, l_, p_] := NIntegrate[(y^(-3) - y^(-5)) (1 - Exp[-yd/l])/(1 - pExp[-yd/l]), {y, 0.01, 1000}];

nlm = NonlinearModelFit[data, 1/(1 - (3 l/(2 d)) G[d, l, p]), {{l, 200}, {p, 4}}, d, Method -> NMinimize]

However, it returns these errors:

NIntegrate::inumr: The integrand ((1-E^(-(yd/l))) (-(1/y^5)+1/y^3))/(1-pExp[-(yd/l)]) has evaluated to non-numerical values for all sampling points in the region with boundaries {{0.01,1000}}. 
NonLinearModelFit: the function value is not a real number at {l,p} = {200.,4.} 

I think the problem is in how I defined the integral function G[d, l, p], because I previously had to fit a different set of data points with a function of only one variable, also defined through NIntegrate, and it gave me no error. Could anyone please help me?
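For what it’s worth, a likely culprit (my assumption, since the error message quotes them verbatim) is that yd and pExp are parsed as single symbols rather than as the products y d and p Exp[...], so the integrand never becomes numeric. A hedged sketch of corrected definitions, with a NumericQ guard so NIntegrate only runs once l and p have numeric values:

(* Sketch of a possible fix: explicit multiplication (y d, p Exp[...]) and
   numeric guards so NIntegrate never sees symbolic parameters. *)
G[d_?NumericQ, l_?NumericQ, p_?NumericQ] :=
 NIntegrate[(y^(-3) - y^(-5)) (1 - Exp[-y d/l])/(1 - p Exp[-y d/l]),
  {y, 0.01, 1000}];

(* In the Fuchs-Sondheimer model p is usually a specularity parameter with
   0 <= p <= 1, so a starting value inside that range may behave better than 4
   (my assumption). *)
nlm = NonlinearModelFit[data, 1/(1 - (3 l/(2 d)) G[d, l, p]),
  {{l, 200}, {p, 4}}, d, Method -> NMinimize]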

Given an algorithm, decide whether it runs in polynomial time?

This problem is not decidable (by a reduction from the halting problem), but it is semi-decidable and therefore verifiable (as those two definitions are equivalent: How to prove semi-decidable = verifiable?).

However, is this problem poly-time verifiable? A decision problem $Q$ is poly-time verifiable iff

there is an algorithm $V$, called a verifier, such that $V$ runs in $O(|x|^{c})$ time for some constant $c$ on all inputs $x$,

if $Q(x) = \text{YES}$ then there is a string $y$ such that $V(x,y) = \text{YES}$, and if $Q(x) = \text{NO}$ then $V(x,y) = \text{NO}$ for all strings $y$.
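For concreteness, a standard instance of this definition (my own illustration, unrelated to the polynomial-time property asked about): COMPOSITE is poly-time verifiable, with a nontrivial divisor of the input playing the role of the certificate $y$.

def composite_verifier(x: int, y: int) -> bool:
    """Verifier in the sense above: accept iff the certificate y is a
    nontrivial divisor of x, which can be checked in polynomial time."""
    return 1 < y < x and x % y == 0

print(composite_verifier(91, 7))   # True: 91 = 7 * 13, so 7 certifies compositeness
print(composite_verifier(97, 13))  # False: 97 is prime, no certificate exists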

Example: take an enumeration of P (such as this one: How does an enumerator for machines for languages work?). For each string $p$ in the enumeration, does there exist some other string (a certificate) $c$ that lets you verify in polynomial time that $p$ is a member of the enumeration?

Convert the given NFA to DFA

I am trying to find a DFA for the regular language given by the expression $L\left( aa^{\ast }\left( a+b\right) \right)$.

First simplifying $ L\left( aa^{\ast }\left( a+b\right) \right)$ we get

$$L\left( aa^{\ast }\left( a+b\right) \right) = L\left( a\right) L\left( a^{\ast }\right) L\left( a+b\right)$$

Then I constructed an NFA for it, which is given below:

[NFA diagram]

But I am not able to convert the above NFA to a DFA, because the state $q_1$ has two $\lambda$-transitions and I do not understand how to deal with them.
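For reference, a sketch of the usual way $\lambda$-transitions are handled: take the $\lambda$-closure of the start state and of every move, then run the subset construction. The transition table at the bottom is my own guess at an NFA for $aa^{\ast}(a+b)$ (the actual diagram is missing), so treat it as a placeholder.

def lambda_closure(states, delta):
    """All NFA states reachable from `states` using lambda-transitions only ("" below)."""
    stack, closure = list(states), set(states)
    while stack:
        q = stack.pop()
        for r in delta.get((q, ""), set()):
            if r not in closure:
                closure.add(r)
                stack.append(r)
    return frozenset(closure)

def subset_construction(start, delta, accepting, alphabet):
    """NFA -> DFA subset construction; each DFA state is a frozenset of NFA states."""
    start_set = lambda_closure({start}, delta)
    dfa_delta, todo, seen = {}, [start_set], {start_set}
    while todo:
        S = todo.pop()
        for a in alphabet:
            moved = set().union(*(delta.get((q, a), set()) for q in S))
            T = lambda_closure(moved, delta)   # close under lambda after each move
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    dfa_accepting = {S for S in seen if S & accepting}
    return start_set, dfa_delta, dfa_accepting

# Guessed NFA for aa*(a+b): q1 is the state with the two lambda-transitions.
delta = {("q0", "a"): {"q1"}, ("q1", "a"): {"q1"}, ("q1", ""): {"q2", "q3"},
         ("q2", "a"): {"q4"}, ("q3", "b"): {"q4"}}
print(subset_construction("q0", delta, {"q4"}, {"a", "b"}))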

How to estimate the complexity of a sequential algorithm given that we know the complexity of each step?

First case: I stumbled upon a two-step sequential algorithm where the big-O complexity of each step is O(N^9).

Second case: the algorithm has three steps, where the complexity of step 1 is O(N^2), the complexity of step 2 is O(N^3), and the complexity of step 3 is O(N^9).

What would be the complexity in the first case and in the second case?
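For reference (my own addition, assuming the steps run one after another on the same input of size $N$): sequential composition adds the step costs, and a sum of big-O terms is dominated by the largest one,

$$O(f(N)) + O(g(N)) = O\big(\max(f(N), g(N))\big),$$

so the first case gives $O(N^9) + O(N^9) = O(N^9)$ and the second gives $O(N^2) + O(N^3) + O(N^9) = O(N^9)$.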

Can an $NDTM$ simultaneously perform a set of operations on all strings of a given length?

Can an $NDTM$ perform a set of operations on all strings of a given length $b$ at the same time? That is, can it operate on all strings of a given length by doing something like: spawn $2^b$ branches, then operate on one string of length $b$ in each branch?

How could it do this, though, if the branches can’t communicate? That’s what I’m having a hard time with. How does any given branch, if it doesn’t know which strings the other branches are running, know which string to run the operations on (so that all the strings are covered by the $2^b$ branches)?
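A minimal deterministic sketch (my own; it enumerates the branch tree rather than implementing an NDTM) of why no communication is needed: each branch makes $b$ independent binary guesses, and that sequence of guesses is the string the branch operates on, so every length-$b$ string lands on exactly one branch.

from itertools import product

def branch_strings(b):
    """Enumerate the 2^b branches: each branch's own guess sequence spells
    out the unique length-b string it is responsible for."""
    for guesses in product("01", repeat=b):   # one tuple of guesses per branch
        yield "".join(guesses)                # the string this branch works on

# Example: the 2^3 = 8 branches for b = 3 cover every binary string of length 3.
print(list(branch_strings(3)))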