Greedy Probabilistic Algorithm for Exact Three Cover

I have a probabilistic greedy algorithm for Exact Three Cover. I doubt it will work on all inputs in polynomial time, since the algorithm does not run in $ 2^n$ time; I will assume it works for some but not all inputs.

Our inputs are $ S$ and $ B$

$ S$ is a set of integers.

$ B$ is a list of 3-element sets.

Algorithm

  1. Input validation functions ensure that each 3-element set contains only elements of $ S$ .

  2. A simple `if` statement ensures that $ |S| \bmod 3 = 0$ .

  3. I treat the sets like lists in my algorithm, so I sort the elements of each set from smallest to largest magnitude (e.g. {3,2,1} becomes {1,2,3}).

  4. I also sort my list of sets $ B$ so that all the {1,2,x}-type sets are grouped together (e.g. the sorted list {1,2,3}, {1,2,4}, {4,5,6}, {4,5,9}, {7,6,5}).

  5. I also generate a new list of sets containing the elements whose {1,2,x} pattern occurs only one time in $ B$ .

  6. Use brute force on small inputs, and on both ends of the list $ B$ up to $ |S|/3 \cdot 2$ sets (e.g. use brute force to check for exact covers in B[0:length(s)//3*2] and in reversed(B)[0:length(s)//3*2]).
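Steps 1–5 above can be sketched in a few lines of Python. This is a minimal sketch of the preprocessing only, not the author's exact code (step 6's brute force is omitted):

```python
from collections import Counter

def preprocess(S, B):
    """Steps 1-5: validate the triples, sort each one, sort B so that
    sets sharing a two-element prefix sit together, and collect the
    triples whose prefix occurs only once."""
    # Step 2: an exact cover by 3-sets needs |S| divisible by 3.
    if len(S) % 3 != 0:
        raise ValueError("|S| must be divisible by 3")
    # Step 1: every triple must have 3 elements, all drawn from S.
    for t in B:
        if len(t) != 3 or not set(t) <= set(S):
            raise ValueError("invalid triple: %r" % (t,))
    # Step 3: represent each set as a sorted list.
    # Step 4: sorting the whole list then groups all {1,2,x}s together.
    triples = sorted(sorted(t) for t in B)
    # Step 5: keep the triples whose (first, second) prefix is unique in B.
    count = Counter((t[0], t[1]) for t in triples)
    unique_prefix = [t for t in triples if count[(t[0], t[1])] == 1]
    return triples, unique_prefix
```

On the example list from step 4, `unique_prefix` contains only `[5, 6, 7]` (the sorted form of {7,6,5}), since the {1,2,x} and {4,5,x} prefixes each occur twice.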

Seed the PRNG with a Quantum Random Number Generator

```python
import random
import quantumrandom

# Seed the PRNG from the quantum random number generator.
for a in range(0, len(B)):
    o = quantumrandom.randint(0, len(B))
    random.seed(int(o))

# I will create a function to shuffle B later.
def shuff(B, n):
    # Fisher-Yates shuffle; randint's upper bound is inclusive,
    # so it must be i (not i + 1) to stay in range.
    for i in range(n - 1, 0, -1):
        random.seed()
        j = random.randint(0, i)
        B[i], B[j] = B[j], B[i]
```

Define the number of times the while loop will run

```python
n = len(s)

# This is a large constant, so no
# instances are impractical to solve.
while_loop_steps = n * 241 * ((n * 241) - 1) * ((n * 241) - 2) // 6
```

While loop

```python
stop = 0
Potential_solution = []
opps = 0
failed_lists = 0
ss = s

while stop <= while_loop_steps:

    opps = opps + 1
    stop = stop + 1

    shuff(B, len(B))

    if len(Potential_solution) == len(ss) // 3:
        # break if Exact Three Cover is found.
        OUTPUT YES
        failed_lists = failed_lists + 1
        HALT

    # opps helps me see
    # if I miss a correct list
    if opps > len(B):
        if failed_lists < 1:
            s = set()
            opps = 0

    # Keep B[0] and append it to the end of the
    # list, then del B[0] to push >> in the list.
    B.append(B[0])
    del B[0]
    Potential_solution = []
    s = set()

    for l in B:
        if not any(v in s for v in l):
            Potential_solution.append(l)
            s.update(l)
```

Run a second while loop over the new list from step 5 if its condition is met, i.e. there is only ONE {1,2,x}-type set (e.g. {7,6,5} shown in step 4).

Two Questions

How expensive would my algorithm be as an approximation for Three Cover?

And, what is the probability that my algorithm fails to find an Exact Three Cover when one exists?

Best algorithm for maximisation with two criteria

I am looking for the optimum algorithm for the following:

  • 10-15 players
  • Each player has between 20 and 40 cards.
  • Each card has one of up to 200 possible characters, and a separate numerical rating (higher is better). A card’s character may be duplicated between players or within a player’s hand. Ratings are highly unlikely to be exact duplicates, though they may be close.

I need to select 5 ‘active’ cards from each player’s hand to meet the following criteria:

  1. All characters must be unique – no duplicates among the ‘active’ cards of any single player, or across all players.
  2. The total of the players’ active cards’ ratings must be as high as possible.

Right now I:

  1. go through all players and find the highest-rated card still available;
  2. mark it as active for the player whose card it is; and
  3. mark that character as used for all other players (so it doesn't get used again).

Repeat 1–3 until all players have 5 active cards.

This gives a pretty good result. But what if we had the following:

Player A: Character 1, rating 100; Character 2, rating 99; Character 3, rating 98

Player B: Character 1, rating 97; Character 4, rating 2; Character 5, rating 1

For the sake of the example, assume we only need two active cards per player.

If Player A uses Character 1 per my algorithm then:

  • Player A: Character 1 + 2 = rating 199
  • Player B: Character 4 + 5 = rating 3
  • Total rating 202

Instead if Player A doesn’t use Character 1 then:

  • Player A: Character 2 + 3 = rating 197
  • Player B: Character 1 + 4 = rating 99
  • Total rating 296

So my algorithm does not produce the best team total rating.
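To make the gap concrete, here is a small sketch (my own illustrative reading of the greedy rule, not your exact code) that pits the greedy pick against exhaustive search on the toy example above:

```python
from itertools import combinations, product

# Toy instance from the example: hands map player -> [(character, rating), ...],
# and we need k = 2 active cards per player.
hands = {
    "A": [(1, 100), (2, 99), (3, 98)],
    "B": [(1, 97), (4, 2), (5, 1)],
}

def greedy(hands, k):
    """Repeatedly take the globally highest-rated card whose character is
    still unused, skipping players who already have k active cards."""
    used, picks, total = set(), {p: [] for p in hands}, 0
    cards = sorted(
        ((rating, char, player)
         for player, hand in hands.items()
         for char, rating in hand),
        reverse=True,
    )
    for rating, char, player in cards:
        if char not in used and len(picks[player]) < k:
            used.add(char)
            picks[player].append((char, rating))
            total += rating
    return total

def brute_force(hands, k):
    """Try every combination of k cards per player; keep the best total
    among combinations where all chosen characters are distinct."""
    players = list(hands)
    best = 0
    for choice in product(*(combinations(hands[p], k) for p in players)):
        chars = [char for hand in choice for char, _ in hand]
        if len(set(chars)) == len(chars):
            best = max(best, sum(r for hand in choice for _, r in hand))
    return best
```

On this example the greedy total falls well short of the exhaustive optimum of 296. As for avoiding brute force: the selection is a bipartite b-matching (players on one side, characters on the other, cards as weighted edges, capacity k per player and 1 per character), so a min-cost-flow or ILP solver finds the optimum in polynomial time.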

Can anyone suggest a better algorithm, other than just brute force trying all the possible combinations to find the highest total rating? I wonder, for example, if there’s something about finding the optimum ratings for each player and then adjusting them to avoid duplication with other players; or perhaps something completely different.

Can an algorithm’s complexity be lower than its tight lower bound / higher than its tight upper bound?

The worst-case time complexity of a given algorithm is $ \Theta(n^3 \log n)$ .
Is it possible that the worst-case time complexity is $ \Omega(n^2)$ ?
Is it possible that the worst-case time complexity is $ O(n^4)$ ?
Can the average time complexity be $ O(n^4)$ ?

IMO it is possible as long as you control the constant $ c$ , but then what’s the point of mentioning any bound other than the tight ones?
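For concreteness, the definitions themselves settle the first two questions; a short sketch:

```latex
% \Theta is a two-sided bound, so weaker one-sided bounds stay true:
f(n) = \Theta(n^3 \log n)
  \;\Longrightarrow\; c_1\, n^3 \log n \;\le\; f(n) \;\le\; c_2\, n^3 \log n
  \quad \text{for all } n \ge n_0 .
% Since n^2 \le c_1 n^3 \log n and c_2 n^3 \log n \le n^4 for large n:
f(n) = \Omega(n^2) \quad\text{and}\quad f(n) = O(n^4).
% Both statements are true -- they are simply not tight.
```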

Finding the twiddle factors for FFT algorithm

I am trying to calculate the twiddle factors for the FFT and iFFT algorithms, but I am unsure whether I have calculated them correctly. I currently get the wrong output from my FFT and believe the twiddle factors might be the reason; I was hoping someone could tell me where I have gone wrong.

This is my code (in C#) to calculate them:

For _N = 4 and _passes = log(_N)/log(2) = 2

```csharp
// twiddle factor buffer creation
_twiddlesR = new Vector2[_N * _passes]; // inverse FFT twiddles
_twiddlesF = new Vector2[_N * _passes]; // forward FFT twiddles

for (int stage = 0; stage < _passes; stage++)
{
    int span = (int)Math.Pow(2, stage); // 2^stage

    for (int k = 0; k < _N; k++) // for each index in the series
    {
        int arrIndex = stage * _N + k; // index into the 1D array

        // not 100% sure if this is correct for theta ???
        float a = pi2 * k / (float)Math.Pow(2, stage + 1);

        // inverse FFT has exp(+i * 2 * pi * k / N)
        Vector2 twiddle = new Vector2((float)Math.Cos(a), (float)Math.Sin(a));

        // forward FFT has exp(-i * 2 * pi * k / N), which is the conjugate
        Vector2 twiddleConj = twiddle.ComplexConjugate();

        // This ternary checks whether index k is a top wing or a bottom wing:
        // the bottom wing requires -T, the top wing requires +T.
        float coefficient = k % Math.Pow(2, stage + 1) < span ? 1 : -1;

        _twiddlesR[arrIndex] = coefficient * twiddle;
        _twiddlesF[arrIndex] = coefficient * twiddleConj;
    }
}
```

My debug data:

For inverse FFT twiddles:

First pass:  1 + 0i, 1 + 0i, 1 + 0i, 1 + 0i
Second pass: 1 + 0i, 0 + i, 1 + 0i, 0 + i

For forward FFT twiddles:

First pass:  1 + 0i, 1 + 0i, 1 + 0i, 1 + 0i
Second pass: 1 + 0i, 0 - i, 1 + 0i, 0 - i

I am not convinced I have it right, but I am unsure what I have got wrong. I'm hoping someone with a better understanding of this algorithm can spot my math error.
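As a sanity check, here is a small Python sketch of the same per-stage computation (assuming, on my part, that `pi2` is 2π and that `Vector2(x, y)` stores the complex value x + iy), so the C# output can be compared against it:

```python
import cmath
import math

def twiddles(N, stage):
    """Reproduce the posted loop for one stage: angle 2*pi*k / 2^(stage+1),
    sign flipped for bottom-wing indices, forward = conjugate of inverse."""
    block = 2 ** (stage + 1)
    span = 2 ** stage
    inverse, forward = [], []
    for k in range(N):
        a = 2 * math.pi * k / block
        t = cmath.exp(1j * a)                     # inverse-FFT twiddle
        coeff = 1 if k % block < span else -1     # top wing +T, bottom wing -T
        inverse.append(coeff * t)
        forward.append(coeff * t.conjugate())
    return inverse, forward
```

For N = 4 this reproduces the debug data above exactly (all ones in the first pass; 1, ±i alternating in the second), so the twiddle generation matches what the C# loop computes; any discrepancy with a reference FFT would then point to the butterfly indexing rather than these factors.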

pfx file encryption algorithm

This seems like something that should be easily documented, but I am unable to find it.

My C# code does this to create a pfx file:

```csharp
X509Certificate2 cert = store.Certificates.Find(
    X509FindType.FindByThumbprint, thumbPrint, false);
File.WriteAllBytes("certFile.pfx", cert.Export(X509ContentType.Pfx, password));
```

The class X509Certificate2 is from System.Security.Cryptography.X509Certificates which appears to be a built-in .NET library.

I would like to know what encryption algorithm is being used to protect the pfx file. I want to confirm whether it is AES256 or not, but I can’t seem to find this information anywhere.

I tried running an OpenSSL command on my "certFile.pfx" file. I had trouble with the password, so I used the "no password" option. Does this mean that the pfx file is encrypted using TripleDES?

(screenshot of the OpenSSL output omitted)
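One way to check directly, assuming the `openssl` command-line tool is available: `pkcs12 -info` prints the PBE (encryption) algorithm used for each bag in the file. The sketch below builds a throwaway PFX just so the command has something to inspect; the file names are illustrative:

```shell
# Create a throwaway key and self-signed cert, export them as a PFX,
# then ask OpenSSL to print the encryption (PBE) algorithms it finds.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=demo" -keyout demo-key.pem -out demo-cert.pem
openssl pkcs12 -export -inkey demo-key.pem -in demo-cert.pem \
    -passout pass:secret -out demo.pfx
# -info prints lines like "PKCS7 Encrypted data: <algorithm>, ..."
openssl pkcs12 -info -noout -in demo.pfx -passin pass:secret
```

Pointing the last command at your own certFile.pfx (with its real password) will print the actual algorithm .NET used, rather than guessing from the "no password" behaviour.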

Algorithm for cutting rods with minimum waste

Given a set of cuts and their lengths, we need to find the minimum number of rods (of constant length), and the cuts required, that lead to minimum wastage.

Here we bundle the rods and cut them all at once. So we can have a bundle with any number of rods.

For example:

Input data – Consider a rod of length 120 inches

( Quantity of Cuts Required, Length (in inches) ) = (5,16") , (5,30") , (24,36") , (4,18") , (4,28") , (6,20")

So here we require cuts such that we get 5 pieces of 16 inches, 5 pieces of 30 inches, and so on.

Output:

Imagine each row (in the image) is a rod of 120 inches, and each table is a bundle whose rows are the rods in that bundle. So the first table is a bundle of 5 rods with cuts [16,30,36,36], the second table is a bundle of 4 rods with cuts [18,28,36,36], and so on. We can see that we have satisfied the input data: we get (5,16"), i.e. five pieces of sixteen inches, and so on.

(image of the output bundles omitted)

Given input (just like the above) with the number of cuts and their lengths, how do we find the bundles of rods and their cuts having the minimum amount of wastage?
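This is the classic cutting-stock problem; exact solutions usually use ILP or column generation. As a starting point, here is a first-fit-decreasing sketch on the example data (a heuristic only, not guaranteed optimal, and it ignores the bundling constraint):

```python
# First-fit decreasing on the example above: rod length 120 inches,
# demand given as (quantity, length) pairs.
ROD = 120
demand = [(5, 16), (5, 30), (24, 36), (4, 18), (4, 28), (6, 20)]

def first_fit_decreasing(demand, rod_length):
    """Place the longest remaining piece into the first rod it fits;
    open a new rod when no existing rod has room."""
    pieces = sorted(
        (length for qty, length in demand for _ in range(qty)),
        reverse=True,
    )
    rods = []  # each rod is a list of cut lengths
    for piece in pieces:
        for rod in rods:
            if sum(rod) + piece <= rod_length:
                rod.append(piece)
                break
        else:
            rods.append([piece])  # no rod had room: open a new one
    return rods

rods = first_fit_decreasing(demand, ROD)
waste = len(rods) * ROD - sum(map(sum, rods))
```

Rods with identical cut patterns can then be grouped into bundles for cutting all at once. For a provably minimal-waste answer, the same data feeds directly into a cutting-stock ILP.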

Improve Prim’s algorithm runtime

Assume we run Prim’s algorithm when we know all the weights are integers in the range {1, …, W}, where W is logarithmic in |V|. Can you improve Prim’s running time?

When saying ‘improving’, it means reaching at least $ O(|E|)$ .

My question is – without using a priority queue, is it even possible? Currently, we learned that Prim’s runtime is $ O(|E| \log |E|)$ .

And I proved I can get to $ O(|E|)$ when the weights are from {1, …, W} with W constant, but when W is logarithmic in |V| I can’t manage to prove or disprove it.
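For reference, the constant-W result mentioned above is usually obtained with a bucket queue instead of a heap. A sketch (my own illustration, not the course's solution): with integer weights in {1, …, W}, keys only take W + 2 values, so extract-min can scan an array of buckets and decrease-key is a set move, giving $ O(|E| + |V| \cdot W)$ overall – $ O(|E|)$ for constant W, and $ O(|E| + |V| \log |V|)$ when $ W = O(\log |V|)$ :

```python
def prim_buckets(adj, W):
    """adj: {u: [(v, w), ...]} for a connected undirected graph with
    integer weights in {1, ..., W}; returns the MST's total weight."""
    INF = W + 1                              # sentinel "not yet reachable" key
    key = {u: INF for u in adj}              # best known edge weight into the tree
    in_tree = set()
    buckets = [set() for _ in range(W + 2)]  # bucket index == current key value
    start = next(iter(adj))
    key[start] = 0
    buckets[0].add(start)
    total = 0
    while len(in_tree) < len(adj):
        # extract-min: scan at most W + 2 buckets for the smallest key
        u = next(b.pop() for b in buckets if b)
        in_tree.add(u)
        total += key[u]
        # decrease-key: move each improved neighbour into a lower bucket
        for v, w in adj[u]:
            if v not in in_tree and w < key[v]:
                buckets[key[v]].discard(v)   # no-op if v was never queued
                key[v] = w
                buckets[w].add(v)
    return total
```

Each extraction scans O(W) buckets and each edge causes at most one O(1) bucket move, which is where the $ O(|E| + |V| \cdot W)$ bound comes from.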

Thanks