How can I show this language is Regular? (Foundations of computing)

The alphabet $\Sigma$ consists of the eight $3\times 1$ column vectors [[0],[0],[0]], [[0],[0],[1]], [[0],[1],[0]], …, [[1],[1],[1]]. Each element of the language is a string of these columns (three of them, in the examples below) in which the bottom row, read as a binary number, is the sum of the binary numbers formed by the two top rows.

For example, [[0],[0],[1]], [[1],[0],[0]], [[1],[1],[0]] is an element of the language: the top row reads 011 = 3, the middle row 001 = 1, and the bottom row 100 = 4, with 3 + 1 = 4.

BUT

[[0],[0],[1]], [[1],[0],[1]], [[0],[0],[0]] is not an element of the language: the top row reads 010 = 2, the middle row 000 = 0, but the bottom row reads 110 = 6 ≠ 2 + 0.
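This is the classic binary-addition language, and the standard regularity argument is that a two-state machine tracking the carry suffices. A minimal Python sketch of that carry-tracking check (my own illustration, not part of the question) reads the columns least-significant bit first; regularity in the given left-to-right order then follows because regular languages are closed under reversal.

```python
# Sketch: a 2-state "DFA" (carry 0 / carry 1) that checks whether the
# bottom row is the binary sum of the two top rows. Columns are given
# most-significant first, so we process them in reverse (LSB first).
def accepts(columns):
    """columns: list of (top, middle, bottom) bit triples, MSB first."""
    carry = 0
    for a, b, s in reversed(columns):
        total = a + b + carry
        if total % 2 != s:       # sum bit must match the bottom row
            return False
        carry = total // 2       # the only state we need to remember
    return carry == 0            # accept iff no carry remains

# The two worked examples from the question:
print(accepts([(0, 0, 1), (1, 0, 0), (1, 1, 0)]))  # True:  3 + 1 = 4
print(accepts([(0, 0, 1), (1, 0, 1), (0, 0, 0)]))  # False: 2 + 0 != 6
```

Since the check only ever remembers one carry bit, the corresponding DFA has two states (plus a reject sink), which is exactly what a regularity proof needs.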

Computing similarity of two graphs with partially overlapping sets of nodes

Consider two graphs $G_1 = (E_1, V_1)$ and $G_2 = (E_2, V_2)$ with their associated sets of edges $E$ and nodes $V$. I’m familiar with concepts such as edit distance for computing the similarity/distance between two graphs. I was wondering whether there are any metrics for estimating similarity between graphs that contain only partially overlapping sets of nodes, for example $V_1 = \{A, B, C\}$ and $V_2 = \{B, C, D\}$.
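One simple baseline in this setting (an illustration of mine, not something the question proposes) is to view each graph as a set of edges over the union of the node sets and take the Jaccard similarity of those edge sets; nodes missing from one graph simply contribute no edges there.

```python
# Jaccard similarity of edge sets for two graphs with partially
# overlapping node sets. Edges incident to nodes absent from the other
# graph can never be shared, which implicitly penalizes the mismatch.
def jaccard_edge_similarity(edges1, edges2):
    e1 = {frozenset(e) for e in edges1}  # frozenset: ignore edge direction
    e2 = {frozenset(e) for e in edges2}
    if not e1 and not e2:
        return 1.0                        # two empty graphs: identical
    return len(e1 & e2) / len(e1 | e2)

# Toy version of the question's example, V1 = {A,B,C}, V2 = {B,C,D}:
g1 = [("A", "B"), ("B", "C")]
g2 = [("B", "C"), ("C", "D")]
print(jaccard_edge_similarity(g1, g2))  # 1 shared edge out of 3 -> 1/3
```

This is far cruder than graph edit distance, but it is well defined for any pair of node sets and is a common starting point before moving to alignment-based measures.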

Does the Hack computer from “The Elements of Computing Systems” use Von Neumann architecture?

I’m reading “The Elements of Computing Systems” (subtitled “Building a Modern Computer from First Principles – Nand to Tetris Companion”) by Noam Nisan and Shimon Schocken.

Chapter 4 is about machine language, and more specifically the machine language used on their computer platform called Hack. Section 4.2.1 says this about Hack:

The Hack computer is a von Neumann platform. It is a 16-bit machine, consisting of a CPU, two separate memory modules serving as instruction memory and data memory, and two memory-mapped I/O devices: a screen and a keyboard.

The CPU can only execute programs that reside in the instruction memory. The instruction memory is a read-only device, and programs are loaded into it using some exogenous means.

With that distinction between instruction memory and data memory, is it really a von Neumann architecture? According to my understanding of the difference between von Neumann and Harvard, that description sounds much more like a Harvard architecture.

Computing continued fraction

I want to build this infinite continued fraction

$$ F_{n}(x) = \frac{1}{1 - x\frac{(n+1)^2}{4(n+1)^2-1}F_{n+1}(x)} $$

which gives for $n=0$

$$ F_{0}(x)=\dfrac{1}{1-\dfrac{(1/3)x}{1-\dfrac{(4/15)x}{1-\dfrac{(9/35)x}{1-\ddots}}}} $$

I took inspiration from this post (@Michael E2); the problem is that when I transform it into the list representation

{b0, {a1, b1}, {a2, b2}, ...}

Clear[F2, iF2];
iF2[0] = 0;
iF2[1] = {1, 1};
iF2[2] = {-x/3, 1};
iF2[n_] := {-x (n + 1)^2/(4 (n + 1)^2 - 1), 1};
F2[n_] := Table[iF2[k], {k, 0, n}];

not all of the terms appear. For 5 terms I get

Block[{n = 5}, F2[n]]
(* {0, {1, 1}, {-x/3, 1}, {-16 x/63, 1}, {-25 x/99, 1}, {-36 x/143, 1}} *)

after $\{-x/3, 1\}$ it is missing the terms $\{-4x/15, 1\}$ and $\{-9x/35, 1\}$.

What is wrong please?
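A likely explanation (my reading of the code, sketched in plain Python rather than Mathematica): the hard-coded entry iF2[2] = {-x/3, 1} corresponds to $k=1$ in the coefficient $-x\,k^2/(4k^2-1)$, so the general rule for entry $n$ should presumably use $k = n-1$, whereas iF2[n_] uses $(n+1)$ and therefore jumps from $-x/3$ straight to $-16x/63$.

```python
# Sketch of the suspected indexing fix, using exact rationals so the
# coefficients can be compared directly with the expansion of F_0(x).
from fractions import Fraction

def a_coeff(n):
    """Coefficient of x in list entry n (n >= 2), with the assumed
    offset k = n - 1 that matches the hard-coded iF2[2] = -x/3."""
    k = n - 1
    return Fraction(-k * k, 4 * k * k - 1)

print([a_coeff(n) for n in range(2, 6)])
# should give the sequence -1/3, -4/15, -9/35, -16/63
```

In the Mathematica definition this would amount to writing (n - 1) in place of (n + 1) inside iF2[n_], but I have only verified the arithmetic above, not the full continued-fraction evaluation.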

Does any algorithm exist for computing the state of a non-trivial cellular automaton after an arbitrary number of time steps?

If I have a cellular automaton, can I see the state of the board after something like $10^{10^{10}}$ time steps? For trivial cases this is possible – for example, a cellular automaton whose board repeats with some finite period.

But are there any cellular automata (or perhaps even similar computational structures) that display chaotic behavior but can also be very quickly evaluated to extreme time steps into the future?
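The "trivial" periodic case mentioned above can itself be made concrete: on a finite board the state sequence is eventually periodic, so one can detect the cycle once and then reduce an astronomical step count modulo the period. A small Python sketch of this idea (my illustration; `step` and the toy shift automaton are hypothetical stand-ins):

```python
# Fast-forward an eventually-periodic dynamical system: record states
# until one repeats, then index into the detected cycle by modular
# arithmetic instead of simulating t steps.
def fast_forward(step, state, t):
    """step: transition function; state: hashable start state; t: step count."""
    seen = {}          # state -> time of first visit
    trajectory = []
    while state not in seen:
        seen[state] = len(trajectory)
        trajectory.append(state)
        state = step(state)
    start = seen[state]                 # where the cycle begins
    period = len(trajectory) - start
    if t < len(trajectory):
        return trajectory[t]            # still in the transient
    return trajectory[start + (t - start) % period]

# Toy "automaton": cyclically shift a 4-cell board; its period is 4,
# so even 10**100 steps collapse to a single modular reduction.
shift = lambda s: s[1:] + s[:1]
print(fast_forward(shift, "abcd", 10**100))  # 10**100 % 4 == 0 -> "abcd"
```

For genuinely chaotic rules this trick fails (the period is astronomically long), which is exactly the tension the question is asking about; the best-known partial answer for Life-like automata is memoized recursive evaluation in the HashLife style.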

Inverse Matrix not computing

Hi, just wondering why my matrix inverse will not compute. I don’t believe it’s a matter of the matrix being wrapped in //MatrixForm, because I copied it by hand into a new document and it still would not work.

Any ideas?

m2 = {{1, 1, 1, 1},
      {0, Exp[L], Exp[0*L], Exp[0*L]},
      {-1, 1, 0, 0},
      {0, Exp[L], 0*Exp[-w2*L], 0*Exp[w2*L]}}
Y = {0, f, 0, 0}
c = Inverse[m2].Y
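One thing worth checking (an aside of mine, not from the question): with the 0*Exp[...] factors, the third and fourth columns of m2 are both (1, 1, 0, 0), so the matrix is singular for every L and has no inverse. A quick numeric check in Python, substituting arbitrary values for the symbols:

```python
# Numeric sanity check: substitute arbitrary L, w2 into the matrix from
# the question and compute its determinant by Laplace expansion. The
# 0*exp(...) entries make columns 3 and 4 identical, forcing det = 0.
import math

L, w2 = 1.0, 1.0
m2 = [[ 1, 1,           1,                      1],
      [ 0, math.exp(L), math.exp(0 * L),        math.exp(0 * L)],
      [-1, 1,           0,                      0],
      [ 0, math.exp(L), 0 * math.exp(-w2 * L),  0 * math.exp(w2 * L)]]

def det4(m):
    """Determinant via Laplace expansion along row 0 (fine for a 4x4)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det4([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

print(det4(m2))  # vanishes: two identical columns, so Inverse must fail
```

If the 0* factors are typos for coefficients that were meant to stay, removing them would likely make the matrix invertible, but that is a guess about the intended physics.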

Computing the sum of an infinite series as a variant of a geometric series

I came across the following series when computing the covariance of a transform of a bivariate Gaussian random vector via Hermite polynomials and Mehler’s expansion:

$$ S = \sum_{n=1}^{\infty} \frac{\rho^n}{n^{1/6}} $$

for $|\rho| < 1$. We know that $S$ must be finite and satisfy

$$ S \le \rho (1-\rho)^{-1} $$

since the original series is dominated by $\sum_{n=1}^{\infty} \rho^n$.

However, there is a catch if we use for $S$ the upper bound $\rho (1-\rho)^{-1}$, which tends to $\infty$ as $\rho \to 1^-$. This happens when the two marginal random variables in the Gaussian vector become (asymptotically) almost surely positively linearly dependent.

So the target is to obtain a good upper bound, much better than $\rho (1-\rho)^{-1}$, when we restrict $\rho$ to be away from $1$, to reduce the effect of $\rho \to 1^-$. In other words, letting $1-\rho = \delta$ for some fixed $\delta \in (0,1)$, what is a better upper bound for $S$?

Because the scaling term $n^{-1/6}$ induces a divergent series $\sum_{n=1}^{\infty} n^{-1/6}$, probably not much improvement should be expected. I have Googled but did not find an illuminating technique for this. Any pointer or help is appreciated. Thank you.
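One standard improvement can in fact be sketched with only the integral comparison test (a sketch of mine, not a claim about the original derivation). Writing $a = -\ln\rho$, the map $t \mapsto t^{-1/6}e^{-at}$ is positive and decreasing on $(0,\infty)$, so

$$ S = \sum_{n=1}^{\infty} n^{-1/6} e^{-an} \;\le\; \int_0^\infty t^{-1/6} e^{-at}\,dt \;=\; \Gamma(5/6)\, a^{-5/6}. $$

Since $a = -\ln(1-\delta) \ge \delta$, this gives

$$ S \;\le\; \Gamma(5/6)\,\delta^{-5/6}, $$

which improves the $\delta^{-1}$ rate of the geometric bound. The exponent appears to be the right one: $S$ is the polylogarithm $\operatorname{Li}_{1/6}(\rho)$, whose known asymptotic as $\rho \to 1^-$ is $\Gamma(5/6)(-\ln\rho)^{-5/6}$, matching the order $\delta^{-5/6}$ of the bound above.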