Yard Care Tools Amazon Affiliate Review Website | Unique Contents

Yard Tools Product Reviews | 100% Hand-Written Reviews | Amazon Affiliate

Short Description:

This website is specially designed for newbies. It is a great opportunity to make money from Google AdSense, Amazon, and ClickBank. No maintenance or experience required.

SUMMARY:

What is This Deal :

I am providing you a professionally designed, SEO-friendly website built on the WordPress platform.

  • Domain : YardcareTools.info ( Free transfer to your…


Given $n$ unique items and an $m^{th}$ normalised label, compute the $m^{th}$ permutation without factorial expansion

We know that the number of permutations possible for $n$ unique items is $n!$. We can uniquely label each permutation with a number from $0$ to $(n!-1)$.

Suppose $n = 4$; the possible permutations with their labels are:

 0: 1234    6: 2134   12: 3214   18: 4231
 1: 1243    7: 2143   13: 3241   19: 4213
 2: 1324    8: 2314   14: 3124   20: 4321
 3: 1342    9: 2341   15: 3142   21: 4312
 4: 1432   10: 2431   16: 3412   22: 4132
 5: 1423   11: 2413   17: 3421   23: 4123

With any well-defined labelling scheme, given a number $m$, $0 \leq m < n!$, we can get back the permutation sequence. Further, these labels can be normalised to lie between $0$ and $1$. The above labels can be transformed into:

0.0000: 1234   0.2608: 2134   0.5217: 3214   0.7826: 4231
0.0434: 1243   0.3043: 2143   0.5652: 3241   0.8260: 4213
0.0869: 1324   0.3478: 2314   0.6086: 3124   0.8695: 4321
0.1304: 1342   0.3913: 2341   0.6521: 3142   0.9130: 4312
0.1739: 1432   0.4347: 2431   0.6956: 3412   0.9565: 4132
0.2173: 1423   0.4782: 2413   0.7391: 3421   1.0000: 4123

Now, given $n$ and the $m^{th}$ normalised label, can we get the $m^{th}$ permutation while avoiding the expansion of $n!$? For example, in the above set of permutations, if the given normalised label were $0.9$, is it possible to get the closest sequence, 4312, as the answer without computing $4!$?
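This is possible if the labelling scheme is chosen so that it decomposes digit by digit. A minimal sketch, assuming the lexicographic labelling (which orders permutations differently from the listing above, but is equally well defined): at each step the fractional label is scaled by the number of remaining items, and the integer part selects the next item, so $n!$ is never materialised.

```python
def mth_permutation(items, m):
    # Decode a normalised label m in [0, 1] into a permutation of `items`
    # under the lexicographic labelling, without ever computing n!.
    # Each of the k remaining items owns an equal 1/k slice of [0, 1];
    # the integer part of m*k picks the slice, the fractional part recurses.
    items = list(items)
    result = []
    k = len(items)
    while items:
        idx = int(m * k)
        idx = min(idx, k - 1)  # guard against m == 1.0 landing out of range
        result.append(items.pop(idx))
        m = m * k - idx        # rescale the remainder into [0, 1)
        k -= 1
    return result
```

The loop runs $n$ times and only ever multiplies by the count of remaining choices, so the cost is $O(n^2)$ list operations and no factorial is expanded. (Floating-point labels lose precision once $n!$ exceeds $2^{53}$; for large $n$ one would carry $m$ as an exact fraction instead.)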

How can I generate a random sample of unique vertex pairings from an undirected graph, with uniform probability?

I’m working on a research project where I have to pair up entities and analyze outcomes. Normally, without constraints on how the entities can be paired, I could simply select one random entity pair, remove it from the pool, then randomly select the next entity pair.

That would be like creating a random sample of vertex pairs from a complete graph.

However, this time around the undirected graph is

  • incomplete
  • possibly disconnected

I thought about using the above method but realized that my sample would not have a uniform probability of being chosen, as the probabilities of pairings are no longer independent of each other due to uneven vertex degrees.
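For reference, the naive pool method described above can be sketched as follows. On an incomplete graph it always produces a maximal matching (condition 2), but, as noted, high-degree vertices bias which matchings come out, so it is not uniform:

```python
import random

def greedy_random_maximal_matching(edges):
    # Naive baseline: visit edges in random order, keep an edge whenever
    # both endpoints are still unmatched. The result is always a maximal
    # matching, but NOT a uniform sample over maximal matchings on an
    # incomplete graph, which is exactly the problem raised above.
    edges = list(edges)
    random.shuffle(edges)
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching
```

The bias of this baseline is why (near-)uniform sampling of matchings is usually approached with Markov-chain methods instead of greedy construction.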

I’m banging my head against the wall over this. It’s best for the research that I generate a sample with uniform probability. Given that my graph has around n = 5000 vertices, is there an algorithm that I could use such that the resulting sample fulfils these conditions?

  1. There are no duplicates in the sample (each vertex in the graph is paired at most once).
  2. The remaining vertices that are not in the sample do not have an edge with each other. (They are unpaired and cannot be paired)
  3. A sample that meets the above two criteria should be chosen with uniform probability over all samples that fulfil the above two points.

There appears to be some work done for bipartite graphs, as seen in this stackoverflow discussion here. The algorithms described obtain a near-uniform sample but don’t seem to apply to this case.

Counting the number of unique syntax trees of a grammar

Let’s say we have some arbitrary grammar and we would like to know how many different syntax trees it generates. For example, the following:

S -> A1|1B

A -> 10|C

B -> C1 | $\varepsilon$

C -> 0|1

This is a quite simple language, and the number of unique trees is not hard to figure out by just going through all of the different possibilities, but for more complex languages it gets difficult to count the number of unique trees by inspection. For this reason, I’m wondering whether there exists some analytical approach, rule of thumb, or software that would allow one to count the number of unique trees faster than just looking at the grammar until all the possibilities are exhausted.
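One analytical observation: for a grammar with no recursive nonterminals (like the example above), the count multiplies across the right-hand side of each production and sums across alternatives. A minimal sketch, with the helper name `count_trees` being mine:

```python
def count_trees(grammar, symbol):
    # Number of distinct parse trees rooted at `symbol`:
    # sum over the symbol's productions of the product of the
    # tree counts of each nonterminal on the right-hand side.
    # Terminals contribute a factor of 1. Only terminates for
    # grammars without recursive cycles (otherwise the count is
    # infinite, or needs generating-function machinery).
    total = 0
    for production in grammar[symbol]:
        trees = 1
        for sym in production:
            if sym in grammar:          # nonterminal
                trees *= count_trees(grammar, sym)
        total += trees
    return total

# The grammar from the question, encoded as alternatives of symbol lists.
grammar = {
    "S": [["A", "1"], ["1", "B"]],
    "A": [["1", "0"], ["C"]],
    "B": [["C", "1"], []],          # [] is the empty production
    "C": [["0"], ["1"]],
}
```

For the example: C has 2 trees, so A has 1 + 2 = 3 and B has 2 + 1 = 3, giving 3 + 3 = 6 trees for S. For recursive grammars the same recurrence becomes a system of equations over generating functions, which is the standard analytical route.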

Inserting many documents into a collection, but only unique elements, based on a specific field (MongoDB)

I cannot seem to find an answer on this anywhere. I need the following:

Given an array of objects with the structure:

{ link: 'some-link', rating: 25, otherFields: '..', ... }

I want to insert them into my collection, so I would just do insertMany... But I only want to insert those elements of the array that are unique, meaning that I do not want to insert objects whose "link" field is something that is already in my collection. For example, if my collection has the following documents:

{ _id: 'aldnsajsndasd', link: 'bob', rating: 34 }
{ _id: 'annn', link: 'ann', rating: 45 }

And I do the “update/insert” with the following array:

[{ link: 'joe', rating: 10 },
 { link: 'ann', rating: 45 },
 { link: 'bob', rating: 34 },
 { link: 'frank', rating: 100 }]

Only documents:

{ link: 'frank', rating: 100 }
{ link: 'joe', rating: 10 }

would be inserted into my collection.
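The client-side filtering this asks for can be sketched like so (the helper name `docs_to_insert` is mine, not a MongoDB API):

```python
def docs_to_insert(existing_links, candidates):
    # Keep only candidate documents whose "link" is not already in the
    # collection and not duplicated earlier in the same batch.
    seen = set(existing_links)
    out = []
    for doc in candidates:
        if doc["link"] not in seen:
            seen.add(doc["link"])
            out.append(doc)
    return out
```

With MongoDB itself the usual way to push this server-side is a unique index on the field, `createIndex({ link: 1 }, { unique: true })`, combined with `insertMany(docs, { ordered: false })`: duplicate-key errors are reported in the bulk-write result, but the documents with a new `link` are still inserted.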

Unique 3SAT to Unique 1-in-3SAT

Suppose I have a CNF formula with clauses of size 2 and 3. It has a unique satisfying assignment.

It was made from a binary multiplication circuit in which I multiplied two prime numbers A and B such that A*B = S, where S is a semiprime number. I added the conditions A != 1, B != 1 and A <= B, then added the value of S to the formula to make sure the assignment is unique. The only way to satisfy the formula is to put the values of the primes A and B, in the correct order, in the input bits.

3SAT can be reduced to 1-in-3SAT. In 1-in-3SAT, exactly one literal must be true in each triplet, and the other two false.

However, the reductions do not seem to preserve the uniqueness of the assignment, since they introduce new variables without forcing their values.
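For concreteness, the standard gadget maps a clause (x ∨ y ∨ z) to three exactly-one triples over fresh variables a, b, c, d. The sketch below brute-force checks that the gadget is satisfiable exactly when the clause is:

```python
from itertools import product

def exactly_one(a, b, c):
    # 1-in-3 constraint: exactly one of the three literals is true.
    return a + b + c == 1

def gadget_satisfiable(x, y, z):
    # Standard reduction gadget for clause (x or y or z):
    #   exactly_one(!x, a, b), exactly_one(y, b, c), exactly_one(!z, c, d)
    # with fresh variables a, b, c, d. Returns True iff some setting
    # of the fresh variables satisfies all three triples.
    return any(
        exactly_one(not x, a, b)
        and exactly_one(y, b, c)
        and exactly_one(not z, c, d)
        for a, b, c, d in product([False, True], repeat=4)
    )

# Sanity check over all eight assignments of the clause variables.
for x, y, z in product([False, True], repeat=3):
    assert gadget_satisfiable(x, y, z) == (x or y or z)
```

This also illustrates the uniqueness problem raised above: for x and z true and y false, two different settings of (a, b, c, d) satisfy the gadget, so uniqueness of the original assignment does not, by itself, make the 1-in-3 instance uniquely satisfiable.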

Can Unique 3SAT be reduced to Unique 1-in-3SAT…

  1. Without knowing the correct assignment?
  2. If not, while knowing the correct assignment?

Unique 1-in-3 SAT

Suppose I have a CNF formula with clauses of size 2 and 3. It has a unique satisfying assignment.

I know the value of each bit of the unique assignment because it was made from a binary multiplication circuit in which I multiplied two prime numbers A and B such that A*B = S, where S is a semiprime number. I added the conditions A != 1, B != 1 and A <= B, then added the value of S to the formula to make sure the assignment is unique. The only way to satisfy the formula is to put the values of A and B, in the correct order, in the input bits.

The number of true literals in each clause is either 1, 2, or 3. Because I know the value of each bit, I can tell exactly which literals are true in each clause and count them. For example, I know which clauses are satisfied by exactly one literal, and that literal is logically part of the unique solution.

My question is simple: If I take out all the clauses with more than 1 true literal, will the assignment necessarily still be unique?

In other words, if I wanted to write down a resolution proof (likely exponentially long) to demonstrate that the solution is unique (the Another Solution Problem, in co-NP), could I write it down using only the clauses with 1 true literal?

Intuitively, I think so, but I am unable to defend my point of view.

File authentication via PGP versus via unique file hashes

Many files that are available for download (e.g. on GitHub) come with “.asc” signature files attached, or with a SHA-256 file hash. Can someone please explain the difference between PGP signatures and file hashes?

My questions:

  • Is the purpose of both (PGP signature / SHA-256 hash) the same, namely to verify that the file is authentic and has not been manipulated?
  • When a downloaded file’s hash/signature does not match the one provided by the responsible developer, does it mean that only you downloaded a corrupt file, or that everyone who downloads the file receives the corrupt version? What I am getting at: can an attacker pre-program the download process to decide which downloads get the corrupt file and which get the correct one?

  • Which method is technically better suited for what situation?

  • Are there any technical “flaws” you are aware of in either verification method? Why use PGP; is a SHA-256 hash not enough to verify file integrity?
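For concreteness, checking a published SHA-256 checksum typically looks like the sketch below; a PGP signature check goes further by also verifying *who* published the hash, since a bare checksum hosted next to the file can be swapped by the same attacker who swapped the file:

```python
import hashlib

def sha256_of(path):
    # Compute a file's SHA-256 digest in chunks, as you would to
    # compare against a published checksum (e.g. a .sha256 file).
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```

Verification is then a string comparison of `sha256_of("download.tar.gz")` against the published hex digest.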

Longest Even-Length Palindromic Subsequence (with unique adjacent characters except for the center 2 letters)

You are given a string S containing lowercase English characters. You need to find the length of the largest subsequence of S that follows the pattern X1, X2, X3, ..., Xn, Xn, ..., X3, X2, X1, where each Xi is some character of S. The only constraint is that no two adjacent characters may be the same, except for the central pair: Xi != X(i+1) for all 1 <= i < n.

Input: The string: S

Output: The integer: 2n

Constraint: 1<=|S|<=10^3

Sample input 1: “acdbdea”

Sample output 1: 4

Explanation: “adda” is the longest subsequence following the given pattern.

Sample input 2: “abbacdeedc”

Sample output 2: 6

Explanation: “cdeedc” is the longest subsequence following the given pattern.

Sample input 3: “taker”

Sample output 3: 0

Explanation: No subsequence follows the given pattern.


This question was asked in a coding interview and I didn’t know how to solve it. I understood how to find the longest palindromic subsequence, but I don’t know how to implement the unique-adjacent-character part. Please help; pseudocode is fine.
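One possible approach, a sketch with the function name being mine: extend the classic interval DP for longest palindromic subsequence with one extra state, the character that the next outer pair is banned from using. Taking s[i] == s[j] as a pair contributes 2 and bans that character for the pair immediately inside it, which is exactly the Xi != X(i+1) constraint.

```python
from functools import lru_cache

def longest_special_palindrome(s):
    # best(i, j, banned): longest pattern X1..Xn Xn..X1 formable inside
    # s[i..j] whose outermost paired character differs from `banned`.
    # The center pair Xn, Xn is allowed to repeat because banning only
    # applies between *different* pattern positions, never within a pair.
    @lru_cache(maxsize=None)
    def best(i, j, banned):
        if i >= j:                       # fewer than two characters left
            return 0
        res = max(best(i + 1, j, banned), best(i, j - 1, banned))
        if s[i] == s[j] and s[i] != banned:
            res = max(res, 2 + best(i + 1, j - 1, s[i]))
        return res

    return best(0, len(s) - 1, "")
```

States are O(|S|^2 * 26) with O(1) work each, fine for |S| <= 10^3, though for strings that long the recursion should be converted to a bottom-up table (or the recursion limit raised) to avoid hitting Python's default depth.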