## Do all Cellular Automata have some kind of information boundary? Can all Cellular Automata be modelled with the Bekenstein Bound?

Since they are discrete models, do they have some kind of information boundary? Can all Cellular Automata models be related to the Bekenstein Bound?

https://en.wikipedia.org/wiki/Bekenstein_bound

## Are Cellular Automata models related to the Bekenstein bound?

Cellular Automata are discrete models which have a regular finite dimensional grid of cells, each in one of a finite number of states, such as on and off.
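As a toy illustration of this definition (the rule number and grid size are arbitrary choices of mine, not part of the question), here is a one-dimensional automaton on a finite cyclic grid:

```python
# Minimal 1-D cellular automaton sketch (Wolfram Rule 110 as an
# arbitrary example): finitely many cells, each in state 0 or 1,
# updated synchronously from the (left, self, right) neighborhood.

RULE = 110  # the update table, packed into the bits of an 8-bit integer

def step(cells, rule=RULE):
    """Apply one synchronous update to a tuple of 0/1 cells (cyclic grid)."""
    n = len(cells)
    out = []
    for i in range(n):
        v = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> v) & 1)
    return tuple(out)

# A grid of n binary cells has exactly 2**n configurations, so each
# configuration carries at most n bits of information.
cells = (0, 0, 0, 1, 0, 0, 0, 0)
cells = step(cells)
```

Since the state space is finite (2^n configurations here), every orbit is eventually periodic; whether this finite information content can be tied to the Bekenstein bound is exactly what I am asking below.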

Several scientists (e.g. Gerard ‘t Hooft, Seth Lloyd, Paola Zizzi) have combined Cellular Automata with the Holographic Principle to describe the universe.

This makes me think that all Cellular Automata models are directly related to the Bekenstein bound (https://en.wikipedia.org/wiki/Bekenstein_bound) and thus to holography, but I would need confirmation from an expert.

So, are literally all cellular automata models related to the Bekenstein bound, or to holography in general?

## Bound on difference of log of unitary matrices

Suppose I have two unitary matrices $$u, v$$ such that $$\|u-v\|<\epsilon$$ in the operator norm. Is there a way to bound the quantity $$\|\log u-\log v\|$$? We can assume that $$\epsilon$$ is sufficiently small, and we choose a branch cut for the logarithm such that the eigenvalues of $$u$$ and $$v$$ will not be split up. Ideally I would like to get a bound of the form $$\|\log u-\log v\|\leq C\epsilon$$, where $$C$$ is a constant independent of the dimension of the matrix. Thanks!
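Not an answer, but a numerical sanity check that may be useful (NumPy; the commuting pair generated along one flow is an illustrative easy case, not the general situation): away from the branch cut, the ratio $\|\log u-\log v\|/\|u-v\|$ stays near $1$.

```python
import numpy as np

# Numerical sanity check (not a proof): generate two nearby unitaries
# along the flow t -> exp(i t h) for a random Hermitian h, and compare
# ||log u - log v|| with ||u - v|| in the operator norm. The eigenvalue
# phases stay well inside (-pi, pi), away from the branch cut at -1.

def unitary_log(u):
    """Principal matrix logarithm via eigendecomposition (u is normal)."""
    vals, vecs = np.linalg.eig(u)
    return vecs @ np.diag(np.log(vals)) @ np.linalg.inv(vecs)

rng = np.random.default_rng(0)
n = 4
g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
h = (g + g.conj().T) / 2                      # random Hermitian generator
w, q = np.linalg.eigh(h)

def u_at(t):
    """The unitary exp(i t h), built from the eigendecomposition of h."""
    return q @ np.diag(np.exp(1j * t * w)) @ q.conj().T

u, v = u_at(0.1), u_at(0.1001)
eps = np.linalg.norm(u - v, 2)                # operator norm ||u - v||
diff = np.linalg.norm(unitary_log(u) - unitary_log(v), 2)
ratio = diff / eps                            # ~1 in this commuting example
```

Of course this only probes one direction of perturbation; the interesting case for the bound is non-commuting $u, v$.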

## Understanding Gillman’s proof of the Chernoff bound for expander graphs

My question is about Claim 1 in the proof here: Gillman (1993). At the end of the proof, the author says:

The matrix product $$U^\top\sqrt{D^{-1}}(P+(\mathrm{e}^x-1)B(0)-\mu I)\sqrt{D}U$$, which is equal to $$(D'-\mu I)(I+(D'-\mu I)^{-1}(\mathrm{e}^x-1)D'U^\top D_A U)$$, is singular. Therefore,

\begin{align*} 1&\leq \lVert (D'-\mu I)^{-1}(\mathrm{e}^x-1)D'U^\top D_A U \rVert_2 \\ &\leq \frac{1}{\mu-\lambda_2}(\mathrm{e}^x -1). \end{align*}

(The first inequality uses the continuity of the function $$\lambda_2(y)$$.)

I understand why the two expressions at the beginning are equal and I understand the second inequality, but I do not understand the first inequality and also why the matrix product is singular.

Let me provide the definitions so you can avoid reading the whole paper. There is a weighted undirected graph $$G=(V, E, w)$$ where $$w_{ij}=0$$ if $$\{i,j\}\notin E$$. Denote $$w_i:=\sum_j w_{ij}$$. Let $$P$$ denote the transition matrix, so $$P_{ij}:=\frac{w_{ij}}{w_i}$$. Denote by $$\lambda_2$$ the second largest eigenvalue of $$P$$ and by $$\epsilon:=1-\lambda_2$$ the spectral gap. Next, let $$M$$ be the weighted adjacency matrix $$M_{ij}:=w_{ij}$$. Let $$A$$ be a set of vertices and $$\chi_A$$ be an indicator function. Some more definitions are:

\begin{align*} &E_r:=\operatorname{diag}(\mathrm{e}^{r\chi_A}) & &P(r):=PE_r \\ &D:=\operatorname{diag}\left(\frac{1}{w_i}\right) & &S:=\sqrt{D}M\sqrt{D} \\ &S_r:=\sqrt{DE_r}M\sqrt{DE_r} & &B(r):=\frac{1}{\mathrm{e}-1}(P(r+1)-P(r)) \end{align*}

Moreover, since $$S$$ is unitarily diagonalizable, there exist a unitary matrix $$U$$ and a diagonal matrix $$D'$$ such that $$D'=U^\top SU$$. Furthermore, there exists a diagonal matrix $$D_A$$ such that $$B(0)=PD_A$$.

Define $$\lambda(r)$$ to be the largest eigenvalue of $$P(r)$$ and $$\lambda_2(r)$$ to be its second largest eigenvalue. As before, $$\epsilon_r := \lambda(r)-\lambda_2(r)$$ is the spectral gap.

In Claim 1 the author lets $$0\leq x\leq r$$ be some number. He also defines $$\mu<\lambda(x)$$ to be any other eigenvalue of $$P(x)$$. At the end of the proof, we are only interested in $$\mu>\lambda_2$$.

Some other facts are:

\begin{align*} &\lVert D' \rVert_2 = \lVert D_A \rVert_2 = 1 & &D'=U^\top\sqrt{D^{-1}}P\sqrt{D}U \\ &P(0)=P & &\lambda(0)=1 & &\lambda_2(0)=\lambda_2 \\ &P=\sqrt{D}S\sqrt{D^{-1}} & &P(r)=\sqrt{DE_r^{-1}}S_r\sqrt{E_rD^{-1}} \end{align*}
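To keep these definitions straight, here is a small numerical check on a toy weighted graph (the weights are invented for illustration): it verifies that $S$ is symmetric and that $P=\sqrt{D}S\sqrt{D^{-1}}$, so $P$ shares its real spectrum with $S$ and has top eigenvalue $1$.

```python
import numpy as np

# Toy weighted graph (weights invented for illustration): check that
# S := sqrt(D) M sqrt(D) is symmetric and that P = sqrt(D) S sqrt(D^{-1}),
# so P is diagonalized in the same basis as the symmetric matrix S.
M = np.array([[0.0, 2.0, 1.0],
              [2.0, 0.0, 3.0],
              [1.0, 3.0, 0.0]])    # weighted adjacency, M_ij = w_ij
w = M.sum(axis=1)                  # w_i = sum_j w_ij
P = M / w[:, None]                 # transition matrix, P_ij = w_ij / w_i
sqrtD = np.diag(1.0 / np.sqrt(w))  # sqrt(D), with D = diag(1 / w_i)
sqrtDinv = np.diag(np.sqrt(w))     # sqrt(D^{-1})
S = sqrtD @ M @ sqrtD

symmetric = np.allclose(S, S.T)                 # S = S^T
similar = np.allclose(P, sqrtD @ S @ sqrtDinv)  # P = sqrt(D) S sqrt(D^{-1})
top = np.linalg.eigvalsh(S).max()               # top eigenvalue of S (and P)
```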

I hope I didn’t miss anything relevant. Thank you for your help.

## A uniform upper bound for Fredholm index of quasi Laplace operators on a compact parallelizable manifold

Assume that $$M$$ is a compact parallelizable manifold. Is there an upper bound for the absolute value of Fredholm index of all operators in the form $$D=\sum_{i=1}^n \partial^2/\partial{X_i^2}$$ where $$\{X_1,X_2,\ldots,X_n\}$$ is a global smooth frame?

## Parseval Type Lower Bound on Sum of Squares of Function Projections

This is related to a previous question here, which was answered by @WillieWong.

Let $$f:\mathbb{Z}\rightarrow \mathbb{C}$$. Assume that the support of $$f$$ is finite, say contained in $$[1,N]$$ (it can even be taken to be exactly $$[1,N]$$ if that helps), and that $$|f|$$ is not only nonzero but essentially constant on its support.

Define the Fourier transform $$\widehat{f}:[0,1)\rightarrow \mathbb{C}$$ by $$\widehat{f}(t)=\sum_{n\in \mathbb{Z}} f(n)\,e^{2i \pi n t}.$$

Now let $$v$$ be a positive integer $$\geq 2$$, and let the “projected” function be $$f_v(n)=\begin{cases} f(n) & \text{if } v\mid n,\\ 0 & \text{otherwise.} \end{cases}$$ Write $$f=f_1$$ for notational simplicity.

I am interested in a specific Parseval type relationship for this function, maybe expressed in terms of the transform of the original function?

Specifically, can one obtain a nontrivial bound of the form $$\sum_{v=1}^m \Big| \sum_{n \in \mathbb{Z}} f_v(n) \Big|^2 \geq A(N,m) \int_0^1 |\widehat{f}(t')|^2 \,dt'$$ using some kind of uncertainty relation?

We can take $$m\ll N$$: a fractional power of $$N$$, or even a power of $$\log N$$.
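To make the objects concrete, here is a small numerical sketch (the choice $f\equiv 1$ on $[1,N]$ and the values of $N$ and $m$ are illustrative): it evaluates the left-hand side $\sum_{v\le m}\bigl|\sum_n f_v(n)\bigr|^2$ and checks Parseval, $\sum_n |f(n)|^2=\int_0^1|\widehat{f}(t)|^2\,dt$, by a Riemann sum.

```python
import cmath

# Illustrative setup: f = 1 on its support [1, N], so |f| is constant there.
N = 12
f = {n: 1.0 + 0.0j for n in range(1, N + 1)}

def fhat(t):
    """The Fourier transform hat(f)(t) = sum_n f(n) e^{2 pi i n t}."""
    return sum(c * cmath.exp(2j * cmath.pi * n * t) for n, c in f.items())

def projected_sum(v):
    """sum_n f_v(n): keep only the terms of f with v | n."""
    return sum(c for n, c in f.items() if n % v == 0)

# Riemann sum for the integral of |hat(f)|^2 over [0, 1); this is exact
# here by discrete orthogonality, since T exceeds twice the support size.
T = 4096
energy = sum(abs(fhat(k / T)) ** 2 for k in range(T)) / T

m = 3
lhs = sum(abs(projected_sum(v)) ** 2 for v in range(1, m + 1))
```

For this $f$, the left-hand side is $144+36+16=196$ while the integral is $N=12$, which shows how much room there is between the two sides in an easy case; the question is what $A(N,m)$ can be guaranteed in general.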

## Find an Asymptotic Upper Bound using a Recursion Tree

The problem is this: use the recursion-tree method to give a good asymptotic upper bound on T(n) = 9T(n^(1/3)) + Big-Theta(1). I am able to get the tree started and find a pattern in the sub-problems, but I am having difficulty finding the total cost of the running times throughout the tree. I cannot figure out how to get the number of sub-problems at the depth i where n = 1. I have a feeling the answer is O(log_3(n)), but I cannot verify that at the moment. Any help would be appreciated.

T(n) = 9T(n^(1/3)) + Big-Theta(1) can be written as T(n) = 9T(n^(1/3)) + C, where C is some constant, since any constant is Theta(1) asymptotically. My recursion tree is explained level by level below. Level 0: the root, whose cost is the constant C.

Level 1: T(n^(1/3)) is written 9 times, representing the sub-problems of the root. Since each node contributes the constant cost C, this level adds up to 9C.

Level 2: each of the 9 sub-problems from level 1 is divided into 9 more sub-problems, each written as T(n^(1/9)). The costs at this level add up to 81C.

Sub-Problem Sizes and Nodes: the number of nodes at depth i is 9^i, and the sub-problem size for a node at depth i is n^(1/(3^i)). The problem size hits n = 1 when this size equals 1. Solving for i yields:

(n^(1/(3^i)))^(3^i) = 1^(3^i), so n = 1^(3^i). This results in n being 1, which doesn't give a logarithmic form!
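For what it's worth, here is a quick numerical look at the recurrence (the base case T(n) = 1 for n <= 3 is my arbitrary choice). Setting the sub-problem size to a constant, say n^(1/(3^i)) = 2, rather than to 1 gives 3^i = log2(n), i.e. depth about log_3(log n); summing 9^i over the levels then suggests Theta((log n)^2) rather than a single logarithm.

```python
# Evaluate T(n) = 9 T(n^(1/3)) + 1 directly, with base case T(n) = 1
# for n <= 3 (an illustrative threshold). The tree has 9^i nodes of
# constant cost at depth i and reaches the base case at depth about
# log_3(log2(n)), so the total is sum of 9^i = Theta((log n)^2).

def T(n):
    if n <= 3:
        return 1
    return 9 * T(n ** (1.0 / 3.0)) + 1

# If T(n) = Theta((log2 n)^2), these ratios should stay bounded;
# sample at n = 2^(3^k) so that log2(n) = 3^k exactly.
ratios = [T(2.0 ** (3 ** k)) / float((3 ** k) ** 2) for k in range(1, 6)]
```

In this experiment the ratios settle near a constant (about 9/8), consistent with a (log n)^2 upper bound for this base-case choice.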

## A Lower bound on the sum of Bernoulli random variables given a constraint on its distribution

Given a set of Bernoulli random variables $$x_1, \dots, x_n$$ (not necessarily identical) with $$X=\sum_{i=1}^n x_i$$, I am interested in finding a lower bound on $$\frac{\mathbb{E} [ \min (X,k) ]}{\mathbb{E} [X]}$$ in terms of $$k$$ and $$\alpha$$, where $$\alpha > \Pr[X>k]$$. For example, I want to show that this ratio is a large enough constant for $$\alpha=0.2$$ and $$k=4$$.
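A quick Monte Carlo sketch of the quantities involved (the Bernoulli parameters, k, and the trial count are all illustrative choices of mine, not part of the question):

```python
import random

# Estimate E[min(X, k)] / E[X] and Pr[X > k] for a sum X of
# independent, non-identical Bernoulli variables (hypothetical p_i).
random.seed(0)
k = 4
probs = [0.1, 0.3, 0.5, 0.2, 0.4, 0.25]
trials = 20000

sum_x = sum_min = tail = 0
for _ in range(trials):
    x = sum(1 for p in probs if random.random() < p)
    sum_x += x
    sum_min += min(x, k)
    tail += x > k

ratio = sum_min / sum_x      # estimate of E[min(X,k)] / E[X]
alpha_hat = tail / trials    # estimate of Pr[X > k]
```

Here E[X] = sum of the p_i = 1.75 and the tail beyond k = 4 is tiny, so the ratio comes out close to 1; the question is how small the ratio can get over all parameter choices with Pr[X > k] < alpha.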

## A bound for $[\mathbb{C}(x,y,z):\mathbb{C}(p,q,r)]$, where $\operatorname{Jac}(p,q,r) \in \mathbb{C}^{\times}$

Y. Zhang (in his PhD thesis) and P. I. Katsylo proved the following nice result; the two proofs are different (see Zhang’s thesis and Katsylo’s paper):

Let $$f: (x,y) \mapsto (p,q)$$ be a $$\mathbb{C}$$-algebra endomorphism of $$\mathbb{C}[x,y]$$ having an invertible Jacobian, namely $$\operatorname{Jac}(p,q):=p_xq_y-p_yq_x \in \mathbb{C}-\{0\}$$. Then the degree of the field extension $$\mathbb{C}(p,q) \subseteq \mathbb{C}(x,y)$$ is $$\leq \min\{\deg(p),\deg(q)\}$$.

Now let $$f: (x,y,z) \mapsto (p,q,r)$$ be a $$\mathbb{C}$$-algebra endomorphism of $$\mathbb{C}[x,y,z]$$ having an invertible Jacobian, namely $$\operatorname{Jac}(p,q,r) \in \mathbb{C}-\{0\}$$.

Is the following claim true?

The degree of the field extension $$\mathbb{C}(p,q,r) \subseteq \mathbb{C}(x,y,z)$$ is $$\leq (\min\{\deg(p),\deg(q),\deg(r)\})^2$$.

Any hints and comments are welcome! Thank you.

## How do you empirically estimate the most popular seat and get an upper bound on total variation?

Say there are $$n$$ seats $$\{s_1, \dots, s_n\}$$ in a theater, and the theater wants to know which seat is the most popular. They allow $$1$$ person in for $$m$$ nights in a row, and on each of the $$m$$ nights they record which seat is occupied.

They are able to calculate probabilities for whether or not a seat will be occupied using empirical estimation: $$P(s_i ~\text{is occupied})= \frac{\# ~\text{of times} ~s_i~ \text{is occupied}}{m}$$. With this, we have an empirical distribution $$\hat{\mathcal{D}}$$ which maximizes the likelihood of our observed data drawn from the true distribution $$\mathcal{D}$$. This much I understand! But I’m totally lost trying to make this more rigorous.

• What is the upper bound on $$\text{E} ~[d_{TV}(\hat{\mathcal{D}}, \mathcal{D})]$$? Why? Note: $$d_{TV}(\mathcal{P}, \mathcal{Q})$$ is the total variation distance between distributions $$\mathcal{P}$$ and $$\mathcal{Q}$$.
• What does $$m$$ need to be such that $$\hat{\mathcal{D}}$$ is accurate to some $$\epsilon$$? Why?
• How does this generalize if the theater allows $$k$$ people in each night (instead of $$1$$ person)?
• Is empirical estimation the best approach? If not, what is?
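Not a full answer, but here is how I would sanity-check the setup by simulation (the true distribution, $n$, and $m$ below are invented for illustration); the empirical TV distance can then be compared against the classical $O(\sqrt{n/m})$ expected rate:

```python
import random

# Simulate m nights of seat choices from a hypothetical true
# distribution, build the empirical distribution, and measure the
# total variation distance d_TV = (1/2) * sum_i |p_i - phat_i|.
random.seed(1)
n, m = 5, 2000
true_dist = [0.4, 0.3, 0.15, 0.1, 0.05]   # invented seat popularities

counts = [0] * n
for _ in range(m):
    r, acc = random.random(), 0.0
    for i, p in enumerate(true_dist):
        acc += p
        if r < acc:
            counts[i] += 1
            break
    else:
        counts[-1] += 1   # guard against float round-off at the boundary

empirical = [c / m for c in counts]
tv = 0.5 * sum(abs(p - q) for p, q in zip(true_dist, empirical))
rate = (n / m) ** 0.5     # the classical O(sqrt(n/m)) scale for E[d_TV]
best_seat = max(range(n), key=lambda i: empirical[i])
```

With these numbers $\sqrt{n/m} = 0.05$ and the most popular seat is recovered correctly; my understanding is that $\mathbb{E}[d_{TV}] \le O(\sqrt{n/m})$, hence $m = \Theta(n/\epsilon^2)$ samples for $\epsilon$ accuracy, is the standard answer to the first two bullets, though I would want a reference for the exact constant.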

If this is too much to ask in a question, let me know. Any reference to a textbook which will help answer these questions will happily be accepted as well.