## Expectation for the number of leaf nodes in a randomized tree construction

Consider this procedure for building a tree from $$v_1, v_2, …, v_n$$:

• insert $$v_1$$
• insert $$v_2$$ and connect it to $$v_1$$ via a directed edge from $$v_2$$ to $$v_1$$
• insert $$v_3$$ and connect it with uniform probability to $$v_1$$ or $$v_2$$
• in general, insert $$v_k$$ and connect it with uniform probability to one of $$v_1, v_2, \dots, v_{k-1}$$

The question is: what is the expected number of leaf nodes, i.e., nodes with no other node connected to them?

My effort: I tried to define an indicator random variable $$x_i$$ for node $$i$$, which is 1 if it is a leaf and 0 otherwise. Then I must find $$E(X) = \sum_i E(x_i)$$. However, finding the probability $$\Pr[x_i = 1]$$ isn't that easy. I guess it must be a product of conditional probabilities, which guarantees no node is attached to node $$i$$ as we insert new nodes. If $$A_{ij}$$ denotes the event that node $$j$$ is connected to node $$i$$, then I need the complements $$\bar A_{ij}$$ and then $$\Pr[x_i = 1] = \Pr[\bar A_{i,i+1}]\cdot\Pr[\bar A_{i,i+2} \mid \bar A_{i,i+1}]\cdots$$
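In fact the events $$\bar A_{i,i+1}, \bar A_{i,i+2}, \dots$$ are independent here, since each inserted node picks its parent independently of all earlier choices, so $$\Pr[x_i = 1] = \prod_{j=i+1}^{n}\big(1-\tfrac{1}{j-1}\big) = \tfrac{i-1}{n-1}$$ for $$i \ge 2$$, and summing gives $$E(X) = n/2$$. A quick Monte Carlo sketch (my own helper, not from the question) to sanity-check this:

```python
import random

def simulate_leaves(n, trials=20000, seed=0):
    """Build the random tree `trials` times and average the leaf count.

    Node j (1-indexed, j >= 2) attaches to a uniformly random earlier
    node; a node is a leaf iff no later node ever attaches to it.
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        has_child = [False] * (n + 1)   # has_child[i]: some later node chose i
        for j in range(2, n + 1):
            parent = rng.randint(1, j - 1)
            has_child[parent] = True
        total += sum(1 for i in range(1, n + 1) if not has_child[i])
    return total / trials

# The indicator computation predicts E[#leaves] = n/2.
print(simulate_leaves(10))   # close to 5.0
```

With $$n = 10$$ the simulated average comes out near $$5$$, matching $$n/2$$.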

However, I’m not sure about my analysis and about the solution for it.

Could you please guide me?

## making the group completion in homology sense unique via the plus construction

A paper by McDuff and Segal justifies the following definition: a map of H-spaces $$X \to Y$$ is a group completion if the map is a localization on homology.

In the paper they prove that when $$X$$ is a topological monoid, the canonical map $$X \to \Omega B X$$ is a group completion in the above sense.

Unfortunately the example where $$X=\sqcup_n B\Sigma_n$$ shows that neither the space $$Y$$, nor the map, is unique up to homotopy:

Here the map $$X \to Y=\mathbb{Z} \times B \Sigma_\infty$$ is a group completion. So is the map $$X \to Y_+=\mathbb{Z} \times (B \Sigma_\infty)_+$$. $$Y$$ has homotopy groups concentrated in degrees 0 and 1, whereas $$Y_+$$ has as its homotopy groups the stable homotopy groups of spheres.

In this case the composite $$X \to Y \to Y_+$$ and the map $$X \to Y_+$$ agree up to homotopy. This raises the question:

Question: Given the H-space $$X$$, is any group completion $$X \to Y$$ followed by the plus construction map going to be unique up to homotopy?

## Algebraic construction of $\varepsilon$-biased sets

Let $$\ell> 1$$ be an integer and consider the mapping $$\text{Tr}:\mathbb{F}_{2^\ell}\to\mathbb{F}_{2^\ell}$$ defined by $$\text{Tr}(x)=x^{2^0}+x^{2^{1}}+\cdots+x^{2^{\ell-1}}.$$ It is then possible to show the following:

1. $$\text{Tr}$$ maps $$\mathbb{F}_{2^\ell}$$ into $$\mathbb{F}_2$$.
2. If $$a\in\mathbb{F}_{2^\ell}$$ is non-zero, then the mapping $$f_a:\mathbb{F}_{2^\ell}\to\mathbb{F}_{2}$$ defined by $$f_a(x)=\text{Tr}(a\cdot x)$$ is $$\mathbb{F}_2$$-linear and $$\mathbb{E}_{x\sim\mathbb{F}_{2^\ell}}[f_a(x)]=\frac{1}{2}$$.
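As a quick numeric sanity check of both facts for $$\ell = 3$$ (a sketch of my own; the 3-bit integer representation of $$\mathbb{F}_{8}$$ and the modulus $$x^3+x+1$$ are choices I made, not part of the problem):

```python
# Represent F_{2^3} as 3-bit integers; multiplication is carry-less
# multiplication reduced modulo the irreducible polynomial x^3 + x + 1.
MOD = 0b1011   # x^3 + x + 1

def gf_mul(a, b):
    """Multiplication in F_8 (peasant multiplication with reduction)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:      # degree reached 3: reduce by MOD
            a ^= MOD
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def trace(x):
    # Tr(x) = x + x^2 + x^4 for ell = 3 (sums in F_8 are XOR)
    return x ^ gf_pow(x, 2) ^ gf_pow(x, 4)

# 1. Tr maps F_8 into F_2 = {0, 1}.
assert all(trace(x) in (0, 1) for x in range(8))

# 2. For every nonzero a, f_a(x) = Tr(a*x) is balanced: half the field
#    maps to 1, so E_x[f_a(x)] = 1/2.
for a in range(1, 8):
    assert sum(trace(gf_mul(a, x)) for x in range(8)) == 4
print("ok")
```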

Now, we consider the set $$S=\{s(x,y,z):x,y,z\in\mathbb{F}_{2^\ell}\}$$, where we index the entries of $$s(x,y,z)$$ by pairs $$0\leq i,j$$ with $$i+j\leq c\sqrt{n}$$ ($$c$$ is a constant chosen so that there are exactly $$n$$ entries). For such $$x,y,z$$ and $$i,j$$ we set $$s(x,y,z)_{i,j}=\text{Tr}(x^iy^jz)$$.

I want to show that for an appropriate choice of $$\ell$$, the set $$S$$ described above is an $$\varepsilon$$-biased set of size $$O(n\sqrt{n}/\varepsilon^3)$$.

So, fix a non-zero test $$\tau\in\{0,1\}^n$$; we need to show that $$\bigg|\mathbb{E}_{s\in S}\Big[(-1)^{\langle s,\tau\rangle}\Big]\bigg|\leq \varepsilon.$$

Let $$x,y,z\in\mathbb{F}_{2^\ell}$$ and consider $$\langle s(x,y,z),\tau\rangle$$. Using the $$\mathbb{F}_2$$-linearity above (indexing $$\tau$$ the same way as $$s(x,y,z)$$) I managed to show that $$\langle s(x,y,z),\tau\rangle=\cdots=f_z\Big(\sum_{i,j}x^iy^j\tau_{i,j}\Big).$$ Finally, I thought of defining the bivariate polynomial $$p_\tau(x,y)=\sum\limits_{i,j}x^iy^j\tau_{i,j}$$ and saying that since it is a non-zero polynomial of total degree at most $$2c\sqrt{n}$$, it attains each value of $$\mathbb{F}_{2^\ell}$$ at most $$2c\sqrt{n}\cdot 2^\ell$$ times (by Schwartz–Zippel), so $$\forall\alpha\in\mathbb{F}_{2^\ell}:\Pr\limits_{x,y\in\mathbb{F}_{2^\ell}}[p_\tau(x,y)=\alpha]=O(\sqrt{n}/2^\ell)$$.

I want to use it but I am stuck. Maybe we can say that the distribution of $$p_\tau(x,y)$$ is close enough to $$U_{\mathbb{F}_{2^\ell}}$$ in statistical distance in order to infer that the expected value of $$f_z(p_\tau(x,y))$$ is close enough to $$1/2$$?
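For what it's worth, here is one standard way to finish from this point (a sketch of the argument I would try, conditioning on the value of $$p_\tau(x,y)$$ rather than comparing distributions): for any fixed $$\alpha\neq 0$$, fact 2 applied with $$a=\alpha$$ says $$f_z(\alpha)$$ is balanced over a uniform $$z$$, so those terms contribute zero bias, and only the event $$p_\tau(x,y)=0$$ can hurt.

```latex
% Split the expectation according to whether p_tau(x,y) vanishes.
% For alpha != 0, E_z[(-1)^{f_z(alpha)}] = 0 by fact 2, hence
\begin{align*}
\Big|\mathbb{E}_{x,y,z}\big[(-1)^{f_z(p_\tau(x,y))}\big]\Big|
  &\le \Pr_{x,y}\big[p_\tau(x,y)=0\big]
     + \Big|\mathbb{E}_{x,y,z}\big[(-1)^{f_z(p_\tau(x,y))}\,\big|\,p_\tau(x,y)\neq 0\big]\Big| \\
  &\le \frac{2c\sqrt{n}}{2^\ell} + 0 .
\end{align*}
% Taking 2^ell = Theta(sqrt(n)/epsilon) makes the bias at most epsilon,
% and |S| = 2^{3 ell} = O(n^{3/2}/epsilon^3) = O(n sqrt(n)/epsilon^3).
```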

## Power operations from a Tate construction

In an action-packed three pages of Lurie’s DAG-XIII: Rational and p-adic Homotopy Theory, section 2.2: Power Operations on $$\mathbb{E}_{\infty}$$-algebras, one finds a construction of the power operation $$P^0$$ following a few observations on the $$p$$-power Tate construction in the category of $$k$$-module spectra: $$\hat{T}_p: X \mapsto ( X^{\otimes p})^{tC_p}$$ and its best colimit-preserving approximation, $$T_p.$$ For any $$\mathbb{E}_\infty$$ $$k$$-algebra $$X,$$ one obtains a map $$T_p(X)[-1] \to X.$$ $$T_p(X)$$ is given by tensoring with a $$k$$-bimodule which is equivalent on one side to $$k^{tC_p}$$ and this allows us to obtain operations (not $$k$$-linear) from elements of Tate cohomology, $$\pi_* k^{tC_p} \simeq \hat{H}^{-*}(C_p, k).$$

The precise statement is in construction 2.2.6, which applies the observation that for $$k$$ a discrete ring of characteristic $$p$$, $$1 \in k$$ determines a canonical element of $$\hat{H}^{-1}(C_p, k),$$ precisely because that group is given as the kernel of the norm. This defines a map $$k \to k^{tC_p}[-1]$$ which upon composition with the map in the previous paragraph gives a map $$X \to X.$$ This map is supposed to be the derived witness to $$P^0.$$
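To spell out that kernel observation (a routine check I am filling in, not text from the paper): for $$C_p$$ acting trivially on $$k$$, the norm is multiplication by $$p$$, which vanishes in characteristic $$p$$.

```latex
% Tate cohomology in degree -1 for G = C_p with trivial coefficients k:
\hat{H}^{-1}(C_p, k) \;=\; \ker\big(N : H_0(C_p, k) \to H^0(C_p, k)\big),
\qquad N = \sum_{g \in C_p} g \;=\; p \cdot \mathrm{id}.
% In characteristic p the norm is zero, so
\hat{H}^{-1}(C_p, k) \;\cong\; k,
% and 1 in k gives the canonical class used in Construction 2.2.6.
```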

Here is remark 2.2.9:

> Construction 2.2.6 can be generalized: given any class $$x \in \hat{H}^{n-1}(\mathbb{Z} / p\mathbb{Z}; k)$$, we obtain an associated map $$P(x) : A \to A[n]$$, which induces group homomorphisms $$\pi_m(A) \to \pi_{m-n}(A)$$. These operations depend functorially on $$A$$ and generate an algebra (the extended Steenrod algebra) of “power operations” which act on the homotopy groups of every $$\mathbb{E}_\infty$$-algebra over $$k.$$

My questions: is there any reference where this construction of the extended powers is fully elaborated? How much of the elementary structure of the Steenrod algebra (e.g. Adem relations, structure of the dual Steenrod algebra, etc.) can be translated to this point of view?

Just to fill in a few details, Lurie constructs these power operations by first rotating the fiber sequence defining the Tate construction to yield

$$\hat{T}_p(X)[-1] \to (X^{\otimes p})_{hC_p} \to (X^{\otimes p})^{hC_p}$$

If $$X$$ is an $$\mathbb{E}_\infty$$ $$k$$-algebra, one then computes the composition

$$T_p(X)[-1] \to \hat{T}_p(X)[-1] \to (X^{\otimes p})_{hC_p} \to (X^{\otimes p})_{h\Sigma_p} \to X$$

where the maps are given by the approximation, the first map in the rotated fiber sequence, a tautological map between colimits, and the $$\mathbb{E}_\infty$$ multiplication, respectively.

## SLR(1) Table construction, FIRST and FOLLOWS

When constructing the SLR(1) table for some grammar, I need to compute the FOLLOW sets of the nonterminals in order to decide where and when to reduce.

Do I compute them for the augmented grammar or the original one?
Is there even a difference?
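As a concrete illustration (a toy fixed-point computation I wrote over a small hypothetical grammar with no ε-productions, not something from the question): augmenting only adds the production S' → S, whose sole effect is to seed the end marker into FOLLOW of the old start symbol, so the FOLLOW sets of the original nonterminals come out the same either way.

```python
def first_of(sym, productions, first):
    """FIRST of a single grammar symbol (terminal or nonterminal)."""
    return first[sym] if sym in productions else {sym}

def compute_follow(productions, start):
    """FOLLOW sets via the usual fixed-point iteration.

    `productions`: dict nonterminal -> list of right-hand-side tuples.
    Toy version: assumes the grammar has no epsilon-productions.
    """
    # FIRST sets.
    first = {nt: set() for nt in productions}
    changed = True
    while changed:
        changed = False
        for nt, rhss in productions.items():
            for rhs in rhss:
                f = first_of(rhs[0], productions, first)
                if not f <= first[nt]:
                    first[nt] |= f
                    changed = True
    # FOLLOW sets: '$' seeds the start symbol.
    follow = {nt: set() for nt in productions}
    follow[start].add('$')
    changed = True
    while changed:
        changed = False
        for nt, rhss in productions.items():
            for rhs in rhss:
                for i, sym in enumerate(rhs):
                    if sym not in productions:
                        continue   # terminals have no FOLLOW set
                    f = first_of(rhs[i + 1], productions, first) if i + 1 < len(rhs) else follow[nt]
                    if not f <= follow[sym]:
                        follow[sym] |= f
                        changed = True
    return follow

# Hypothetical grammar: E -> E + T | T,  T -> id.  Augmented adds E' -> E.
grammar = {'E': [('E', '+', 'T'), ('T',)], 'T': [('id',)]}
augmented = {"E'": [('E',)], **grammar}

print(sorted(compute_follow(grammar, 'E')['E']))      # ['$', '+']
print(sorted(compute_follow(augmented, "E'")['E']))   # ['$', '+']  -- unchanged
print(sorted(compute_follow(augmented, "E'")["E'"]))  # ['$']
```

FOLLOW of every original nonterminal is identical in both versions; the augmented grammar merely adds the new start symbol with FOLLOW = {$}, which is what tells the parser when to accept.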

## Does the Eilenberg Moore Construction Preserve fibrations?

Say we have a Grothendieck fibration $$p : E \to B$$ and a monad $$T$$ on $$B$$ and a lift $$T’$$ of $$T$$ to $$E$$, i.e. a monad on $$E$$ such that $$pT’ = Tp$$ and $$p$$ preserves $$\eta, \mu$$.

Then because the Eilenberg–Moore construction is functorial, we have a morphism $$EM(p)$$ from $$T’\text{-}Alg$$ to $$T\text{-}Alg$$. Under what conditions is $$EM(p)$$ a fibration?

## Tracker for object construction, copy and movement

I made an object tracker for debugging and testing purposes called ccm_counter (construction, copy, move counter). It counts constructor, copy and move calls. It can be used to detect inefficient forwarding and unnecessary (or unintended) copies. The behavior is as follows:

### What it does

Let vector<A> be the type you want to test and A its element type (A must be copy and move constructible and must not be final). Drop in the counter like vector<ccm_counter<A>>. ccm_counter behaves like A except that it also counts:

1. constructor calls
2. copy constructor calls
3. move constructor calls
4. copy assignment calls
5. move assignment calls
6. copies
7. moves

Point 1 to 5 are bound to the object (or variable) while point 6 and 7 are bound to the value of the object. For example:

```cpp
struct A{};

ccm_counter<A> a, b; // a and b increment constructor call count by 1
b = a;               // a +1 copies -> import copies and moves from a -> b +1 copy assignment
a = b;               // b +1 copies -> import copies and moves from b -> a +1 copy assignment
b = a;
```

This example results in (shortened)

```text
a:
constructor calls:      1
copy assignment calls:  1
total copies made:      3

b:
constructor calls:      1
copy assignment calls:  2
total copies made:      3
```

Now we add the vector<A> (the example uses std::vector):

```cpp
struct A{};

vector<ccm_counter<A>> vec;
ccm_counter<A> a, b;
vec.shrink_to_fit();
vec.push_back(a);
vec.push_back(a);
vec.push_back(a);
vec.push_back(b);

for (int i = 0; i < vec.size(); i++)
    cout << "element " << i << ":\n" << vec[i].get_object_stats() << "\n"
         << *vec[i].get_value_stats() << "\n\n";
```

which prints:

```text
element 0:
constructor calls:      0
copy constructor calls: 0
move constructor calls: 1
copy assignment calls:  0
move assignment calls:  0
total copies made:      3
total moves made:       6

element 1:
constructor calls:      0
copy constructor calls: 0
move constructor calls: 1
copy assignment calls:  0
move assignment calls:  0
total copies made:      3
total moves made:       6

element 2:
constructor calls:      0
copy constructor calls: 0
move constructor calls: 1
copy assignment calls:  0
move assignment calls:  0
total copies made:      3
total moves made:       6

element 3:
constructor calls:      0
copy constructor calls: 1
move constructor calls: 0
copy assignment calls:  0
move assignment calls:  0
total copies made:      1
total moves made:       0
```

The constructor and assignment call counts are not very useful in this example, because they are bound to the object itself, which, in this case, is just an element in the internal buffer of the vector (and that buffer changes as soon as it is reallocated). However, the total copies and moves were tracked by the value. Element 0 gets copied when we call vec.push_back(a) the first time. Element 1 gets copied when we call vec.push_back(a) the second time. Because the vector exceeds its capacity each time we call vec.push_back(...), its internal buffer has to be reallocated each time. This results in element 0 being moved into the new buffer, which is reflected in the output, and so on.

### Implementation

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <iomanip>
#include <memory>
#include <ostream>
#include <type_traits>
#include <utility>

struct ccm_object_stats {
    std::size_t ctor = 0;        // number of (any, except copy and move) constructor calls
    std::size_t copy_ctor = 0;   // number of copy constructor calls
    std::size_t copy_assign = 0; // number of copy assignment calls
    std::size_t move_ctor = 0;   // number of move constructor calls
    std::size_t move_assign = 0; // number of move assignment calls

    constexpr ccm_object_stats& operator+=(const ccm_object_stats& rhs)
    {
        ctor += rhs.ctor;
        copy_ctor += rhs.copy_ctor;
        move_ctor += rhs.move_ctor;
        copy_assign += rhs.copy_assign;
        move_assign += rhs.move_assign;
        return *this;
    }

    constexpr ccm_object_stats operator+(const ccm_object_stats& rhs)
    {
        ccm_object_stats ret(*this);
        return ret += rhs;
    }
};

std::ostream& operator<<(std::ostream& os, const ccm_object_stats& stats)
{
    using namespace std;
    constexpr size_t sw = 24;
    // Include 1 in the max so log10 is never called with 0.
    const size_t nw = static_cast<size_t>(
        log10(max({ stats.ctor, stats.copy_ctor, stats.copy_assign,
                    stats.move_ctor, stats.move_assign, size_t{ 1 } }))) + 1;

    os
        << setw(sw) << left << "constructor calls: " << setw(nw) << right << stats.ctor << "\n"
        << setw(sw) << left << "copy constructor calls: " << setw(nw) << right << stats.copy_ctor << "\n"
        << setw(sw) << left << "move constructor calls: " << setw(nw) << right << stats.move_ctor << "\n"
        << setw(sw) << left << "copy assignment calls: " << setw(nw) << right << stats.copy_assign << "\n"
        << setw(sw) << left << "move assignment calls: " << setw(nw) << right << stats.move_assign;
    return os;
}

struct ccm_value_stats {
    std::size_t copies = 0; // number of copies made
    std::size_t moves = 0;  // number of moves made

    constexpr ccm_value_stats& operator+=(const ccm_value_stats& rhs)
    {
        copies += rhs.copies;
        moves += rhs.moves;
        return *this;
    }

    constexpr ccm_value_stats operator+(const ccm_value_stats& rhs)
    {
        ccm_value_stats ret(*this);
        return ret += rhs;
    }
};

std::ostream& operator<<(std::ostream& os, const ccm_value_stats& stats)
{
    using namespace std;
    constexpr size_t sw = 24;
    const size_t nw = static_cast<size_t>(
        log10(max({ stats.copies, stats.moves, size_t{ 1 } }))) + 1;

    os
        << setw(sw) << left << "total copies made: " << setw(nw) << right << stats.copies << "\n"
        << setw(sw) << left << "total moves made: " << setw(nw) << right << stats.moves;
    return os;
}

// A wrapper object that inherits from T and counts construction, copy and move operations of T.
template <typename T>
class ccm_counter : public T {
public:
    template <typename ...Args, typename = std::enable_if_t<std::is_constructible_v<T, Args...>>>
    constexpr explicit ccm_counter(Args... args) noexcept(std::is_nothrow_constructible_v<T, Args...>)
        : T(std::forward<Args>(args)...), _val_stats{ std::make_shared<ccm_value_stats>() }
    {
        _obj_stats.ctor++;
    }

    constexpr ccm_counter(const ccm_counter& other) noexcept(std::is_nothrow_copy_constructible_v<T>)
        : T(other), _val_stats(other._val_stats)
    {
        static_assert(std::is_copy_constructible_v<T>, "T must be copy constructible.");
        _val_stats->copies++;
        _obj_stats.copy_ctor++;
    }

    constexpr ccm_counter(ccm_counter&& other) noexcept(std::is_nothrow_move_constructible_v<T>)
        : T(std::move(other)), _val_stats(other._val_stats)
    {
        static_assert(std::is_move_constructible_v<T>, "T must be move constructible.");
        _val_stats->moves++;
        _obj_stats.move_ctor++;
    }

    constexpr auto operator=(const ccm_counter& other) noexcept(std::is_nothrow_copy_assignable_v<T>)
        -> std::enable_if_t<std::is_copy_assignable_v<T>, ccm_counter&>
    {
        T::operator=(other);
        _val_stats = other._val_stats;
        _val_stats->copies++;
        _obj_stats.copy_assign++;
        return *this;
    }

    constexpr auto operator=(ccm_counter&& other) noexcept(std::is_nothrow_move_assignable_v<T>)
        -> std::enable_if_t<std::is_move_assignable_v<T>, ccm_counter&>
    {
        T::operator=(std::move(other));
        _val_stats = other._val_stats;
        _val_stats->moves++;
        _obj_stats.move_assign++;
        return *this;
    }

    [[nodiscard]] constexpr ccm_object_stats get_object_stats() const noexcept
    {
        return _obj_stats;
    }

    constexpr void set_object_stats(ccm_object_stats stats)
    {
        _obj_stats = std::move(stats);
    }

    [[nodiscard]] constexpr std::shared_ptr<ccm_value_stats> get_value_stats() const noexcept
    {
        return _val_stats;
    }

    constexpr void set_value_stats(std::shared_ptr<ccm_value_stats> stats)
    {
        _val_stats = std::move(stats);
    }

private:
    ccm_object_stats _obj_stats{};
    std::shared_ptr<ccm_value_stats> _val_stats;
};

template <typename T>
std::ostream& operator<<(std::ostream& os, const ccm_counter<T>& counter)
{
    return os << counter.get_object_stats();
}
```

### Edit

I am very sorry but I had a serious bug in the implementation which I had to fix. I hope no one is already writing a review. If this change is against the rules I will revert it.

## JavaScript Code Construction for recording Node Tree Changes

I have a node tree structure that renders database information in the browser using vis.js. I won’t use the actual table entities, but suppose it’s something like:

```text
building
    floor
        office
            room
```

In particular, I am wanting to write an object that holds an array of all the node details. Each node contains the following data:

```text
id
description
short_description
date_inserted
date_updated
db_primary_key (only upon initialisation)
```

Each edge holds a relation linking the id of two nodes like this:

```js
{from: 'foo', to: 'bar'}
```

The node tree clearly represents the underlying foreign key relations in a database. A building (root node) has 1 or more floors (children of the top node), a floor has 1 or more offices (grandchildren of the top node), an office has 1 or more rooms (great-grandchildren of the top node).

The vis.js graph has an edit function that allows the user to create nodes and update their position, and I’ve extended this to store custom attributes (like the short_description). I want to keep track of the changes made to the data in an object like this:

```js
var NodeTree = {
    collection: {},                        // all node objects
    toCreate: {b: [], f: [], o: [], r: []}, // newly created nodes - use ids or the object itself?
    toEdit:   {b: [], f: [], o: [], r: []}, // edited attribute of node - use ids or the object itself?
    toDelete: {b: [], f: [], o: [], r: []}, // deleted node from tree - use ids or the object itself?
    create: function(type, node) {
        this.toCreate[type].push(node);
    }                                       // function to add created node?
};

var NodeEdges = {
    collection: [],
    toCreate: {b: [], f: [], o: [], r: []},
    toEdit:   {b: [], f: [], o: [], r: []},
    toDelete: {b: [], f: [], o: [], r: []},
    create: function(type, node) {
        this.toCreate[type].push(node);
    }
};
```

The idea is that the user can make changes to the Node Tree and then post these to a server that will update the database.

To put it succinctly, I have a hashmap of nodes and an array of edges written in JavaScript. I want to be able to keep track of changes to them so I can post the changes to the server. How do I best go about writing this in an OO way in JavaScript? I’m hoping not to have to post the entire NodeTree and NodeEdges objects.

## Parallelize the construction of sparse matrices

I’m trying to parallelize a few constructions of chain complexes (given as a list of sparse matrices). I have a function that for any column (or row) computes the list of nonzero entries in that column (row), but several entries can appear at the same place $$(i,j)$$ in the matrix (their values should be summed).
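As a language-neutral aside (a plain-Python sketch of my own, with made-up toy data, not part of the Mathematica code): the merging step is just accumulating (i, j) → value triplets into a map that sums duplicate positions, which is what makes the rule-list approach parallel-friendly — chunks of triplets can be produced independently and merged at the end.

```python
from collections import defaultdict

def assemble(triplets, shape):
    """Merge (i, j, value) triplets into a dense matrix, summing duplicates."""
    acc = defaultdict(int)
    for i, j, v in triplets:
        acc[(i, j)] += v
    rows, cols = shape
    return [[acc[(i, j)] for j in range(cols)] for i in range(rows)]

# Two contributions land on the same entry (0, 1) and are summed to 0.
triplets = [(0, 1, 1), (0, 1, -1), (1, 0, 1)]
print(assemble(triplets, (2, 2)))   # [[0, 0], [1, 0]]
```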

For example, given a simplicial complex as a list bases (simplices of given dimension), I write:

```mathematica
chCx[bases_] := Module[
  {dim = Length@bases, dims = Length /@ bases, basesk, baseskk, bdrs = {}, bdr, x},
  basesk = Association@Table[bases[[1, i]] -> i, {i, dims[[1]]}];
  Do[
    baseskk = Association@Table[bases[[k, i]] -> i, {i, dims[[k]]}];
    ...
    AppendTo[bdrs, bdr];
    basesk = baseskk;,
    {k, 2, dim}];
  bdrs];
```

Then I replace the ... with four different commands, obtaining the functions chCx, chCx0, chCx1, chCx2:

```mathematica
bdr = SparseArray[{}, Length /@ {basesk, baseskk}];
Do[bdr[[basesk[Delete[s, {{r}}]], baseskk[s]]] += (-1)^(r + 1),
  {r, 1, k}, {s, Keys@baseskk}];
```

and

```mathematica
bdr = SparseArray[{}, Length /@ {basesk, baseskk}];
SetSharedVariable[bdr];
DistributeDefinitions[basesk, baseskk];
ParallelDo[bdr[[basesk[Delete[s, {{r}}]], baseskk[s]]] += (-1)^(r + 1),
  {r, 1, k}, {s, Keys@baseskk}];
```

and

```mathematica
bdr = ParallelCombine[
  Module[{bdri = SparseArray[{}, Length /@ {basesk, baseskk}]},
    Do[bdri[[basesk[Delete[s, {{r}}]], baseskk[s]]] += (-1)^(r + 1),
      {r, 1, k}, {s, #}];
    bdri] &,
  Keys@baseskk, Plus];
```

and

```mathematica
bdr = SparseArray[Flatten[
    ParallelTable[{basesk[Delete[s, {{r}}]], baseskk[s]} -> (-1)^(r + 1),
      {r, 1, k}, {s, Keys@baseskk}], 1],
  Length /@ {basesk, baseskk}];
```

When I use these commands with

```mathematica
n = 12;
bases = Table[Subsets[Range@n, {k}], {k, 1, n - 1}]; (* sphere *)
AbsoluteTiming[chCx[bases]][[1]]
AbsoluteTiming[chCx0[bases]][[1]]
AbsoluteTiming[chCx1[bases]][[1]]
AbsoluteTiming[chCx2[bases]][[1]]
```

on a 2-core i3-560 CPU, I get:

- chCx (sequential Do): 0.774023 s
- chCx0 (ParallelDo with a shared variable): 338.296 s
- chCx1 (ParallelCombine): 32.9942 s
- chCx2 (ParallelTable of rules): 17.921 s

Why is the nonparallelized code still the fastest? How can I efficiently parallelize this construction?