I could not get such links

I wanted to get links like this but could not.
I tried every approach for three years, but it failed.
I know that GSA is a great program, but top marketers do not share their secrets.
I read a lot of articles and watched a lot of courses, but it was useless.

How does a shared vault in password managers such as 1Password work?

The password manager 1Password has a feature where multiple accounts in a group ("family") can share login information with each other.

From my understanding, a password manager is never supposed to know my passwords because they are encrypted with my master password before being sent "to the cloud".

How then can I decrypt / see the password that a family member shares with me through the Shared Vault without 1Password decrypting it?

If all passwords are encrypted with my private master password, how is it possible that another user can decrypt them without me or the password manager knowing the other person's master password?
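This is not 1Password's actual code, but a minimal sketch of the usual key-wrapping pattern behind shared vaults (everything below is my own illustration under that assumption): each shared vault gets its own random symmetric key, items are encrypted with that vault key, and the vault key is then encrypted separately under each member's public key. A master password only protects that member's own private key locally, so nobody needs anyone else's master password and the server only handles ciphertext.

```python
# Hypothetical sketch of a shared-vault design; NOT 1Password's actual implementation.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# Each family member holds a key pair; in a real product the private key would
# itself be protected locally by that member's master password.
alice_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# One random symmetric key per shared vault; vault items are encrypted with it.
vault_key = Fernet.generate_key()
item_ciphertext = Fernet(vault_key).encrypt(b"example.com password: hunter2")

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The vault key is "wrapped" once per member; only these blobs and the item
# ciphertext ever need to leave the clients.
wrapped_for_alice = alice_priv.public_key().encrypt(vault_key, oaep)
wrapped_for_bob = bob_priv.public_key().encrypt(vault_key, oaep)

# Bob unwraps the vault key with his own private key and reads the shared item,
# without anyone learning Alice's or Bob's master password.
bob_vault_key = bob_priv.decrypt(wrapped_for_bob, oaep)
print(Fernet(bob_vault_key).decrypt(item_ciphertext))
```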

Is there such a problem as b-Matching with different b values?

Consider a bipartite graph $G=(L \cup R, E)$. The usual b-Matching problem is to find a set of edges $M \subseteq E$ such that each node in $L$ and $R$ is incident to at most $b$ edges of $M$, and the total weight $\sum_{e \in M} w(e)$ is maximized. What if we have different values of $b$? E.g., $b(v)=5, \forall v \in R$ and $b(v)=2, \forall v \in L$. What is this problem called? Is it constrained matching, or k-cardinality assignment, or something else? I need to find some literature on it.
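For reference, one way to write the degree-constrained version as an integer program (this is just my restatement of the setup above, not a formulation taken from any particular reference) is

$$\max \sum_{e \in E} w(e)\,x_e \quad \text{s.t.} \quad \sum_{e \ni v} x_e \le b(v) \ \ \forall v \in L \cup R, \qquad x_e \in \{0,1\} \ \ \forall e \in E,$$

with, e.g., $b(v)=2$ for $v \in L$ and $b(v)=5$ for $v \in R$.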

Thanks!

Is finding $l$ subsets such that their intersection has at most $k$ elements NP-complete or in P?


I have a set $M$, subsets $L_1,\dots,L_m$ and natural numbers $k,l\leq m$.

The problem is:

Are there $l$ distinct indices $1\leq i_1,\dots,i_l\leq m$ such that

$$\left|\bigcap_{j=1}^{l} L_{i_j}\right| \leq k\,?$$
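(Just to make the decision problem concrete, here is a brute-force check; it enumerates all $\binom{m}{l}$ index sets, so it is exponential and says nothing about membership in P.)

```python
# Brute-force check of the decision problem, for illustration only.
from itertools import combinations

def has_small_intersection(subsets, l, k):
    """Return True iff some l distinct subsets have an intersection of size <= k."""
    for chosen in combinations(subsets, l):
        common = set.intersection(*map(set, chosen))
        if len(common) <= k:
            return True
    return False

# Tiny example over M = {1,...,5}.
L_sets = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5}, {1, 3, 5}]
print(has_small_intersection(L_sets, l=3, k=1))  # True: the first three share only {3}
```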

Now my question is whether this problem is $NP$-complete or not. What puzzles me are the two parameters $l$ and $k$, because the NP-complete problems conceptually close to it that I took a look at (set cover, vertex cover) each have only one such parameter that also appears in this problem.

I then tried to write a polynomial-time algorithm that looks at which of the sets $L_1,\dots,L_m$ share more than $k$ elements with other sets, but even if all sets shared more than $k$ elements with the others, this wouldn't mean that their common intersection has more than $k$ elements…

This question comes kind of close, but in it there is no restriction on the number of subsets to use and the size of the intersection should be exactly $k$; maybe it is useful anyway.

Can somebody enlighten me further?

Every decidable language $L$ has an infinite decidable subset $S \subset L$ such that $L \setminus S$ is infinite

Given an infinite decidable language $L$, if $S \subset L$ is such that $L \setminus S$ is finite, then $S$ must be decidable. This is true since, given a decider for $L$, we can construct a decider for $S$:

Simulate the decider for $L$ on the input. If it accepts, go over the (finite) set $L \setminus S$ and check whether the input is there; if it is, reject, and if it isn't, accept. If the decider for $L$ rejects, reject.
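In code form, the construction is roughly the following sketch (decides_L stands for the assumed decider for $L$, and finite_difference for an explicit listing of the finite set $L \setminus S$):

```python
# Sketch of the construction above: given a decider for L and the finite set
# L \ S listed explicitly, we obtain a decider for S.
def make_decider_for_S(decides_L, finite_difference):
    """decides_L: function deciding L; finite_difference: the finite set L \\ S."""
    def decides_S(w):
        if not decides_L(w):               # the decider for L rejects -> reject
            return False
        return w not in finite_difference  # w is in L: accept iff w is not in L \ S
    return decides_S
```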

Another point: if $S \subset L$ is finite, then $S$ must also be decidable, since every finite language is decidable.

Now we have the last case, where $S$ is infinite and $L \setminus S$ is infinite. We know that there must be some subsets $S$ corresponding to this case that are undecidable, since there are $2^{\aleph_0}$ such $S$ but only $\aleph_0$ deciders. Denote $D(L) = \{ S \subset L : |S| = |L \setminus S| = \infty \wedge S \text{ is decidable} \}$.

Is it true that for every infinite decidable language $L$ we have $D(L) \neq \emptyset$?

If this is true then, as a consequence, for every infinite decidable language $L$ we get a sequence of decidable languages $L_n$ such that $L_0=L$, $L_{n+1} \subset L_n$, and $|L_n \setminus L_{n+1}| = \infty$.

We also get a limit set $L_\infty = \{ e \in L : \forall n \in \mathbb{N},\ e \in L_n \}$ and can discuss whether it is empty/finite/infinite and decidable or not.

This seems like a nice way to study decidable languages, and I am curious to know whether this direction is indeed interesting and whether there are published articles regarding these questions.

Thanks for any help

Why does dueling fighting style add such a big bonus?

I’ve created a character in D&D Beyond. He’s a level 4 half-elf fighter. Here’s the character sheet:

[Character sheet image 1] [Character sheet image 2]

The character has a longsword and D&D Beyond has calculated the damage roll on this to be 1d8+12. The +12 bonus seems too big according to my calculations. I’m calculating the roll like this:

  1. The base roll for a longsword is 1d8
  2. I add my strength modifier so that’s +4
  3. I have dueling as my fighting style speciality, so that's +2 because I'm wielding the longsword with one hand

So I calculate the roll as 1d8+6. Where is D&D Beyond getting the extra +6 bonus from?

If I change the fighting style from dueling to say, archery, the bonus drops to +4, which seems correct. Why does the dueling fighting style add such a huge bonus?

Are there any enumerations of (machines for) languages in P such that all the machines can be simulated efficiently on any input?

From Computational Complexity: A Modern Approach, the Efficient Universal Turing Machine Theorem: There exists a TM U such that for every x, α ∈ {0, 1}*, U(x, α) = Mα(x), where Mα denotes the TM represented by α. Furthermore, if Mα halts on input x within T steps then U(x, α) halts within C·T·log T steps, where C is a number independent of |x| and depending only on Mα's alphabet size, number of tapes, and number of states.

From Kozen INDEXINGS OF SUBRECURSIVE CLASSES: "the class of polynomial time computable functions is often indexed by Turing machines with polynomial time counters…. The collection of all (encodings over (0, 1)) of such machines provides an indexing of PTIME…we have Theorem: No universal simulator for this indexing can run in polynomial space."

He then goes on to say: "can it be proved that gr U for any indexing of PTIME requires more than polynomial space to compute? We have proved this for a wide class of indexings, namely counter indexings satisfying the succinct composition property."

  • gr U is the graph of the universal function U and (barring details) represents the minimum power necessary to simulate P uniformly.

  • And the counter indexing (or polynomial time counters) he is referring to is specified in the answer here: How does an enumerator for machines for languages work?

I'm wondering how the efficient universal Turing machine theorem relates to Kozen's result that, for certain types of enumerations of P, there is no machine that can efficiently simulate the machines enumerated. What causes simulation to be difficult, and can it be circumvented? Namely: does there exist an enumeration of P that describes the (machines for) languages in P in such a way that allows them to be efficiently simulated (with no more than polynomial overhead) on any input, or, as Kozen puts it, "allow(s) easy construction of programs from specifications"?

My guess is that part of the reason for the difficulty is that the efficient simulation theorem only says there exists a machine that can efficiently simulate any single TM, not a whole class of them… and when you start having to simulate more than one TM you lose the ability to optimize your simulator's design for any particular language (running any particular machine) and have to design the simulator with all the various machines you need to simulate in mind (and the more those machines differ, the larger your overhead gets).

PS. A possible example could be in implicit complexity theory… where they construct languages that are complete for certain complexity classes. A language that is complete for P doesn’t seem to have trouble running its programs (which represent the languages in P)? But, if there are examples here, how do they overcome the difficulty Kozen is referring to, as they are programming systems and thus enumerations / indexings?

Just as a related aside… I think I have a proof that, for a language Lp, 1. and 2. cannot both be true at the same time:

  1. An enumeration of P, call it language Lp (whose members are the strings in the enumeration) is decidable in poly time.

  2. All the P machines / languages represented by strings in Lp can be efficiently simulated by a universal simulator for P on any input.

It makes sense that there would be a relationship between the way the machines are encoded and the overhead of simulating them. Since 1. can be made true, that leaves 2., which brings us to the question being asked… Is it possible that 2. is always false, meaning that for ANY enumeration/encoding of P (any language Lp), simulating those machines is not efficient for any universal simulator for P?

Here’s a rough sketch for anyone interested:

Take L := {w : w∈Lp and W(w)=0}, where W denotes the machine that w encodes.

So, one way to do this: our diagonal function maps w to the language in P that w encodes (if w encodes a machine for a language in P, i.e., if w is a member of Lp), and if it does not, it maps w to the language containing all words. The mapping between a w and a language translates to: w∈L iff w is not a member of the language it is mapped to.

Since all w's that aren't encodings of P machines (members of Lp) are mapped to the language containing all words, they are members of L iff they are not members of this language. This is always false, so all words that are not members of Lp are not members of L.

This leaves only words of Lp as candidates for membership in L. But for w to be a member of L, not only does it need to be a member of Lp; the machine that w encodes, W, also needs to evaluate to 0 on w. That is, w∈L <-> w∈Lp and W(w)=0.

L is not in P. If L were in P, then for some w, w would encode a machine Wl for L, and for that w we would have w∈L iff Wl(w) = 0, i.e., w∈L iff w is not in L, a contradiction.

Now, let's employ assumption 1., that Lp is polynomial-time decidable, as well as assumption 2., that any machine specified by Lp can be simulated with no more than polynomial overhead by a universal simulator.

Then we can devise an algorithm, namely: given w, decide whether w∈Lp. If w∈Lp, then run W(w). If W(w)=0, then w∈L (otherwise w∉L).
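A sketch of that algorithm in code, where in_Lp and simulate are hypothetical stand-ins for exactly the assumptions 1. and 2.:

```python
# Sketch only: in_Lp and simulate are assumed, not constructed here.
def make_diagonal_decider(in_Lp, simulate):
    """in_Lp: assumed poly-time decider for Lp (assumption 1.).
    simulate: assumed efficient universal simulator, simulate(w, x) = W(x) (assumption 2.)."""
    def in_L(w):
        if not in_Lp(w):            # words not encoding a P-machine are never in L
            return False
        return simulate(w, w) == 0  # w is in L iff its own machine W outputs 0 on w
    return in_L
```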

By the above algorithm, under assumptions 1. and 2., L would be in P, which contradicts the proof above that L is not in P. I think that proof is correct, and I conclude that 1. and 2. cannot both be true at the same time.

Exceptions from exceptions: does such a thing exist?

I am intrigued: In many languages there are both normal control flow and exceptions.

But I have never seen "an exception from an exception" or "an exception from an exception from an exception". Why?

Why are there just two variants?

My guess is that an "exception from an exception" would be just another class of exceptions, but this needs to be elaborated further.
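For what it's worth, some languages do surface this directly. Python, for example, chains an exception raised while another is being handled, which is essentially "an exception from an exception":

```python
# An exception raised inside an except block is chained to the one being handled:
# implicitly via __context__, or explicitly via __cause__ with "raise ... from ...".
try:
    try:
        1 / 0                      # first exception
    except ZeroDivisionError as exc:
        raise ValueError("raised while handling the division error") from exc
except ValueError as outer:
    print(type(outer.__cause__))   # <class 'ZeroDivisionError'>
```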

Delete rows or columns of a matrix containing invalid elements such that a maximum number of valid elements is kept

Originally posted on Stack Overflow, but I was told to post here.

Context: I am doing a PCA on an MxN (N >> M) matrix with some invalid values located in the matrix. I cannot infer these values, so I need to remove all of them, which means deleting the whole corresponding row or column. Of course I want to keep the maximum amount of data. The invalid entries represent ~30% of the data; most of them completely fill a few rows, while the rest are scattered across the matrix.

Some possible approaches:

  • Similar to this problem, where I format my matrix so that valid data entries equal 1 and invalid entries equal a huge negative number. However, all the proposed solutions there are of exponential complexity, and my problem is simpler.

  • Computing the ratio (invalid data / valid data) for each row and column and deleting the one(s) with the highest ratio, then recomputing the ratios for the sub-matrix and again removing the highest (I am not sure how many rows or columns can safely be removed in one step), and so on until there is no invalid data left; see the sketch after this list. It seems like an okay heuristic, but I am unsure whether it always gives the optimal solution.
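Here is a rough sketch of that greedy idea (the function and example are my own illustration; it is a heuristic only and is not guaranteed to keep the maximum number of valid entries):

```python
import numpy as np

def greedy_drop(valid_mask):
    """Greedy heuristic sketch: repeatedly delete the row or column with the
    highest fraction of invalid entries until none remain.
    valid_mask: boolean array, True where the entry is valid."""
    mask = valid_mask.copy()
    kept_rows = list(range(mask.shape[0]))
    kept_cols = list(range(mask.shape[1]))
    while mask.size and not mask.all():
        row_bad = 1.0 - mask.mean(axis=1)   # fraction of invalid entries per row
        col_bad = 1.0 - mask.mean(axis=0)   # ... and per column
        if row_bad.max() >= col_bad.max():
            worst = int(row_bad.argmax())
            mask = np.delete(mask, worst, axis=0)
            del kept_rows[worst]
        else:
            worst = int(col_bad.argmax())
            mask = np.delete(mask, worst, axis=1)
            del kept_cols[worst]
    return kept_rows, kept_cols             # indices of the rows/columns kept

# Example: a 4x5 matrix where row 1 is mostly invalid plus one other bad cell.
valid = np.ones((4, 5), dtype=bool)
valid[1, :4] = False
valid[3, 2] = False
print(greedy_drop(valid))   # drops row 1 and column 2: ([0, 2, 3], [0, 1, 3, 4])
```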

My guess is that it is a standard data analysis problem, but surprisingly I could not find a solution online.

I am seeing the error "rpcinfo: can’t contact rpcbind: RPC: Remote system error – No such file or directory" when running the rpcinfo command

I am new to Kali Linux, so sorry if this is a basic question, but I am seeing the error message "rpcinfo: can’t contact rpcbind: RPC: Remote system error – No such file or directory" whenever I run the command rpcinfo -p for NFS testing.