Do multiple sources of counting as one size larger for carrying capacity stack?

I am currently mocking up a Goliath Barbarian, and was wondering if there is a limit to the number of sources of “You count as if you were one size larger for the purpose of determining your carrying capacity” that stack.

From prior browsing, I’ve found that Goliath’s Powerful Build & Totem Barbarian’s Bear Aspect feature stack in that regard, but could you stack, for example:

  • Powerful Build (innate Goliath feature)
  • 6th level Totem Barbarian Bear Aspect
  • Brawny feat

Essentially, could you double your carrying capacity and lift/pull capacity thrice?

Counting a number of sequences where the item is the most frequent one

Consider an array $a$ whose elements are numbers. Consider all subsequences of $a$. For each subsequence $s$, we find an element $k$ with the largest number of occurrences in $s$ (if there are several options, choose the smallest such $k$).

Problem: for each number $k$, find the number of subsequences of $a$ for which $k$ is the chosen number.


Input: [1, 2, 2, 3]
Output:
  1 -> 6 ([1], [1, 2], [1, 2'], [1, 3], [1, 2, 3], [1, 2', 3])
  2 -> 8 ([2], [2'], [2, 3], [2', 3], [2, 2'], [1, 2, 2'], [2, 2', 3], [1, 2, 2', 3])
  3 -> 1 ([3])
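Since the tie-breaking rule is easy to get wrong, the expected output can be reproduced with a brute-force check that enumerates all non-empty subsequences (exponential, but useful as a reference implementation; the function name is mine):

```python
from collections import Counter
from itertools import combinations

def brute_force(a):
    """For every non-empty subsequence of a, find the element with the
    largest number of occurrences (ties broken by smaller value) and
    tally how often each value is chosen."""
    wins = Counter()
    for r in range(1, len(a) + 1):
        for subset in combinations(range(len(a)), r):
            counts = Counter(a[i] for i in subset)
            # chosen element: maximize count, then minimize value
            k = min(counts, key=lambda v: (-counts[v], v))
            wins[k] += 1
    return dict(wins)
```

Running `brute_force([1, 2, 2, 3])` reproduces the output above: `{1: 6, 2: 8, 3: 1}`.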

My idea is as follows. I make a map $cnt$ where $cnt[x]$ is the number of times $x$ occurs in the array $a$. When trying to find the answer for $k$, I create a map $temp$ where $temp[i] = \min(cnt[k], cnt[i])$. Letting $m$ denote the value of $cnt[k]$, in a subsequence the element $k$ can occur from $0$ to $m$ times, and for each of these occurrence counts I find the answer, which is a simple subset-counting problem; summing over all of them gives my final answer.

But the complexity of my algorithm is bad, as it is quadratic or worse. Can it be improved?

Counting a property of every subset

Consider an array $a$ of length $n$ with $a_i < n$, and an array $b = [0, 0, \ldots, 0]$ of length $n$. Consider every subset of the array $a$, denoted $s$. I need to find the number $k$ with the largest number of occurrences in $s$ (i.e. the most frequent one; if there are several options, choose the smallest) and then increase $b[k]$ by one. I need the final $b$ after considering all subsets.

My idea is as follows. I make an array $val$ of length $n$ where $val[i]$ is the number of times $i$ occurs in the array $a$. When trying to find the value of $b[k]$, I create an array $temp$ where $temp[i] = \min(val[k], val[i])$. Letting $m$ denote the value of $val[k]$, in a subset the element $k$ can occur from $0$ to $m$ times, and for each of these occurrence counts I find the answer, which is a simple subset-counting problem; summing over all of them gives my final answer.

But my algorithm's complexity is bad, as it is quadratic or worse. Could anyone help me improve it?

Counting circuits with constraints

Please forgive me if this question is trivial; I couldn’t come up with an answer (nor find one).

In order to show that there are Boolean functions $f : \{0,1\}^n \rightarrow \{0,1\}$ that can only be computed by circuits of size $\Omega(2^n/n)$, we use a counting argument: there are at most $O(2^{k \log k})$ circuits of size $k$, but $2^{2^n}$ such functions.

Suppose that I am interested in counting the circuits of size $k$ that compute different functions. The "simple" counting argument won’t work, since two "syntactically" different circuits may actually compute the same function. In other words, I want to bound the size of the set: $$F = \{ f: \{0,1\}^n \rightarrow \{0,1\} \mid f \text{ can be computed using a circuit of size } k \}$$

Then $|F| <$ the number of circuits of size $k$ (since any circuit computes exactly one function), but how can I bound $|F|$ from below, i.e. find an $x$ with $x < |F|$?

Counting ones in a stream of bits

I have to count the number of ones among the last $m$ bits of a stream of bits, where $m \leq n$ and $n$ is the window size, and the algorithm should take space polylogarithmic in $n$. I could store the last $n$ bits and then count the ones among those $n$ bits, but that takes $O(n)$ space. How can this be achieved in $O(\log^k n)$ space, for some $k > 0$?
Any hint would be appreciated.
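For reference, the $O(n)$-space baseline described above can be written as follows (the class name is mine); the point of the question is to replace the explicit window with a polylogarithmic summary:

```python
from collections import deque

class NaiveWindow:
    """O(n)-space baseline: store the last n bits explicitly."""
    def __init__(self, n):
        self.window = deque(maxlen=n)  # the oldest bit is dropped automatically

    def push(self, bit):
        self.window.append(bit)

    def count_ones(self, m):
        """Count the ones among the last m bits (m <= n)."""
        return sum(list(self.window)[-m:])
```

The usual route to $O(\log^k n)$ space is to replace the explicit window with exponentially growing buckets of ones (the exponential-histogram technique), at the price of answering the count approximately.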

Analyzing a counting triangles streaming algorithm which uses $\ell_0$ sampling

I’m trying to analyze the following streaming algorithm for counting triangles (see below). It supposedly also works for dynamic graphs (i.e. the "turnstile model", where edge deletions are allowed).

  1. What am I missing? According to my (buggy) analysis, the variance of the reported estimator equals 0 (see below).
  2. How should I analyze the use of an $\ell_0$-sampler? (Currently, my analysis ignores it.)


  1. Init: Define $k := O(\frac{n^3}{\epsilon^2 t})$. Pick $k$ random subsets $S_1, \ldots, S_k \subset V$, each of size 3 (with repetition).
  2. Update: For each random subset of size 3, denoted $S_i$, maintain a counter $x_{S_i}$, which represents the number of internal edges in $S_i$, i.e. a number in $\{0, 1, 2, 3\}$.
  3. Output: Use an $\ell_0$-sampler over the vector $x = (x_{S_1}, \ldots, x_{S_k})$ to pick an index $j \in [k]$ (i.e. $j$ is chosen uniformly at random from $\{i \in [k] : x_{S_i} > 0\}$). Compute $z := \mathbb{1}[x_{S_j} = 3]$ and $N := \|x\|_0$. Report $\tilde{T} = N \cdot z$.
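For concreteness, here is a minimal insertion-only sketch of the Init and Update steps (all names are mine; it ignores deletions and the $\ell_0$-sampler of the Output step):

```python
import random

def init_triples(n, k, seed=0):
    """Init: pick k random 3-subsets of V = {0, ..., n-1}."""
    rng = random.Random(seed)
    return [frozenset(rng.sample(range(n), 3)) for _ in range(k)]

def process_stream(triples, edge_stream):
    """Update: x[i] counts the edges internal to triples[i],
    so each x[i] ends up in {0, 1, 2, 3}."""
    x = [0] * len(triples)
    for (u, v) in edge_stream:
        for i, s in enumerate(triples):
            if u in s and v in s:
                x[i] += 1
    return x
```

With deletions (the turnstile model), the inner update would subtract instead of add, according to the sign of the stream item.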

Analysis, where $T$ is the number of triangles: $$\begin{split} E[\tilde T] &= E[N \cdot z] \\ &= N \cdot E[z] \\ &= \|x\|_0 \cdot \Pr\left[ \mathbb{1}\left[ x_{S_j} = 3 \right] \right] \\ &= \|x\|_0 \cdot \frac{T}{\|x\|_0} \\ &= T \end{split}$$

Which seems to be OK. But for the variance: $$\begin{split} Var[\tilde T] &= Var[N \cdot z] \\ &= N^2 \cdot Var[z] \end{split}$$ Then $$\begin{split} Var[z] &= Var\left[ \mathbb{1}\left[ x_{S_j} = 3 \right] \right] \\ &= E\left[ \left( \mathbb{1}\left[ x_{S_j} = 3 \right] \right)^2 \right] - \left( E\left[ \mathbb{1}\left[ x_{S_j} = 3 \right] \right] \right)^2 \\ &= E\left[ \mathbb{1}\left[ x_{S_j} = 3 \right] \cdot \mathbb{1}\left[ x_{S_i} = 3 \right] \right] - \left( \frac{T}{\|x\|_0} \right)^2 \\ &= \Pr\left[ \mathbb{1}\left[ x_{S_j} = 3 \right], \mathbb{1}\left[ x_{S_i} = 3 \right] \right] - \left( \frac{T}{\|x\|_0} \right)^2 \\ &= \left( \Pr\left[ \mathbb{1}\left[ x_{S_j} = 3 \right] \right] \right)^2 - \left( \frac{T}{\|x\|_0} \right)^2 \\ &= \left( \frac{T}{\|x\|_0} \right)^2 - \left( \frac{T}{\|x\|_0} \right)^2 \\ &= 0 \end{split}$$

Streaming algorithm for counting triangles in a graph

As described in the reference, the algorithm (see at the bottom) is supposed to output an estimator $\hat T$ for the number of triangles in a given graph $G = (V, E)$, denoted $T$. It is written that "it can easily be shown" that $$E[\hat T] = T$$ But unfortunately, I’m not seeing it. Trying to analyze $E[\hat T]$, I reason as follows:

  • At line 1, denote by $p$ the probability of randomly (and uniformly) choosing an edge which is part of a triangle. Since triangles can share edges, $$\frac{T}{m} \le p \le \frac{3T}{m}$$ For example, consider the following case:

    [Figure: three outer triangles whose inner edges form a central fourth triangle.]

    The central triangle doesn’t add new edges to the number of ways to choose an edge which is part of a triangle. You can imagine a different configuration in which there are only the 3 outer triangles and they don’t touch each other (in this configuration, the central 4th triangle doesn’t appear). In both cases ((case i) 4 triangles as seen in the image; (case ii) 3 disjoint triangles), the probability of choosing an edge which is part of a triangle is 1, although the number of triangles is different.

  • At line 2, the probability of choosing uniformly at random a vertex which "closes a triangle" with the edge from the previous step is exactly $\frac{1}{n-2}$.

Therefore I only see that

$$T \le E[\hat T] \le 3T$$

What am I missing?

Another question I have is regarding line 3. The stream is ordered, and we first pick a random edge $(u, v)$ (line 1), then a random vertex $w$ from $V \setminus \{u, v\}$ (line 2). I feel that the analysis should take into account that at line 3 we check whether $(u, w)$ and $(v, w)$ appear after $(u, v)$ in the stream. Maybe it will become clearer once I understand the answer to my first question.


  1. Pick an edge $(u, v)$ uniformly at random from the stream.
  2. Pick a vertex $w$ uniformly at random from $V \setminus \{u, v\}$.
  3. If $(u, w)$ and $(v, w)$ appear after $(u, v)$ in the stream, then output $m(n-2)$. Else, output $0$.
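Lines 1–3 can be run in a single pass by reservoir-sampling the edge of line 1 and re-drawing $w$ whenever the sampled edge changes; this is my reading of the algorithm, with all names invented for the sketch:

```python
import random

def estimate_triangles(edges, n, seed=0):
    """One-pass estimator: reservoir-sample an edge (u, v), pick a random
    w outside it, and watch whether both closing edges appear afterwards."""
    rng = random.Random(seed)
    sample = None                    # current reservoir edge (u, v)
    w = None
    seen_uw = seen_vw = False
    m = 0
    for (a, b) in edges:
        m += 1
        if rng.random() < 1.0 / m:   # keep the m-th edge with probability 1/m
            sample = (a, b)
            w = rng.choice([v for v in range(n) if v not in (a, b)])
            seen_uw = seen_vw = False
        elif sample is not None:
            u, v = sample
            if {a, b} == {u, w}:
                seen_uw = True
            if {a, b} == {v, w}:
                seen_vw = True
    return m * (n - 2) if (seen_uw and seen_vw) else 0
```

On the single-triangle stream $(0,1), (1,2), (0,2)$ with $n = 3$, the estimator outputs $m(n-2) = 3$ exactly when the first edge $(0,1)$ stays in the reservoir (probability $1/3$), so the expectation is $1 = T$.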

Also, although I didn’t see it written, I believe there’s an assumption that $V$ is known in advance.

Reference: Data streams lecture notes by Prof. Amit Chakrabarti, section "15.3 Triangle Counting".


Complexity of enumeration vs complexity of counting

I have a problem understanding the difference between the complexity of enumeration and the complexity of counting. We can solve every counting problem using an enumeration algorithm.

Now, I have a problem with the following. Say we have to count the number of positive integers less than or equal to a positive integer $N$. We know that the answer is $N$, and therefore we can solve the counting problem in $\mathcal{O}(1)$. The corresponding enumeration problem is to output every such positive integer, which can be done in $\mathcal{O}(N)$ steps; I believe this is pseudo-polynomial complexity, and that the complexity is in fact exponential in the input size, since $N$ is written using only $\log N$ bits.

Are these complexity claims correct? Is the number of solutions of a counting problem always bounded by the time complexity of the corresponding enumeration algorithm?

Any hints or suggestions appreciated.

Counting the number of unique syntax trees of a grammar

Let’s say we have some arbitrary grammar for which we would like to know how many different syntax trees it generates. For example, the following:

S -> A1|1B

A -> 10|C

B -> C1 | $\varepsilon$

C -> 0|1

This is a quite simple language, and the number of unique trees is not hard to figure out by just going through all of the different possibilities, but for more complex languages it gets difficult to count the number of unique trees just by examining the grammar. For this reason, I’m wondering whether there exists some analytical approach, rule of thumb, or software that would allow counting the number of unique trees faster than just looking at the language long enough until all the possibilities are exhausted?
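For a grammar with no recursive nonterminals, such as the one above, the count can be computed mechanically: the number of trees rooted at a nonterminal is the sum, over its alternatives, of the product of the tree counts of the right-hand-side symbols. A sketch (the dictionary encoding and names are mine; a recursive grammar would make this recursion diverge and needs a different technique, e.g. generating functions):

```python
def count_trees(symbol, grammar):
    """Number of distinct derivation trees rooted at `symbol`.
    Symbols without productions are terminals and have exactly one tree."""
    if symbol not in grammar:
        return 1
    total = 0
    for alternative in grammar[symbol]:
        trees = 1                        # empty alternative (epsilon) contributes 1
        for sym in alternative:
            trees *= count_trees(sym, grammar)
        total += trees
    return total

# The example grammar: S -> A1 | 1B, A -> 10 | C, B -> C1 | eps, C -> 0 | 1
grammar = {
    "S": [["A", "1"], ["1", "B"]],
    "A": [["1", "0"], ["C"]],
    "B": [["C", "1"], []],               # [] encodes the epsilon alternative
    "C": [["0"], ["1"]],
}
```

Here `count_trees("S", grammar)` gives 6: $C$ has 2 trees, so $A$ has $1 + 2 = 3$, $B$ has $2 + 1 = 3$, and $S$ has $3 + 3 = 6$. Note that this counts derivation trees; two distinct trees with the same yield are counted separately.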