Mathematica not computing matrix product, just returning the multiplication expression

I'm doing some very simple matrix operations in Mathematica, but for some reason the last operation I'm trying to evaluate does not return the actual product; it just shows the symbolic multiplication.

    P = {{1, 2}, {3, 4}}/10;
    im = {{1}, {1}}
    in = {{1}, {1}}
    A = ArrayFlatten[{{KroneckerProduct[in\[Transpose], IdentityMatrix[2]]}, {KroneckerProduct[IdentityMatrix[2], im\[Transpose]]}}]
    p = Flatten[P] // MatrixForm
    A.p

This last operation, $A \cdot p$, returns the following:

(screenshot: the product A.p is displayed unevaluated as a symbolic expression)

Why is that so?

Origin of “Cookie” in Computing?

"Cookies" are a user-facing computing construct. They are codified in many technical specifications, including the earliest reference in an HTTP spec, RFC 2109, published February 1997.

Many claim the use in HTTP followed from UNIX "magic cookies." Eric Raymond provided a definition of what a "magic cookie" is:

Something passed between routines or programs that enables the receiver to perform some operation; a capability ticket or opaque identifier […]

OK.

But why did the UNIX community start using the phrase "cookie" to begin with? Is it because you put cookies into a jar, and take them out? When did this whole thing begin? Does anyone have a citation of the first usage?

When computing Monotone Polygon Triangulation, how do I store the formed diagonals in the DCEL?

It's my understanding that a DCEL has the following structs:

    public class Vertex {
        public Point Coordinate;
        public Edge IncidentEdge;
    }

    public class Edge {
        public Vertex Origin;
        public Edge Twin;
        public Face IncidentFace;
        public Edge Next;
        public Edge Prev;
    }

    public class Face {
        public Edge IncidentEdge;
        public List<Edge> edges;
    }

If I determine the diagonals based on the type of vertex I'm at, how would I store the diagonal that was formed? A vertex can only store one incident edge, and once I create a diagonal, a given vertex will have additional incident edges. Do I just ignore this and only fix the half-edge pointers?
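Here is a rough sketch of what I think the insertion has to do, written as Python mirroring the classes above (the parameters u_in and v_in, the boundary half-edges of the face that end at the diagonal's two endpoints, are my own assumption about what the sweep already knows):

    class Edge:
        # Minimal stand-in for the Edge class above.
        def __init__(self):
            self.Origin = None
            self.Twin = None
            self.IncidentFace = None
            self.Next = None
            self.Prev = None

    def add_diagonal(u, v, u_in, v_in):
        # u, v: the diagonal's endpoints.
        # u_in, v_in: existing half-edges of the face being split whose
        # destinations are u and v (assumed already known from the sweep).
        h_uv, h_vu = Edge(), Edge()
        h_uv.Origin, h_vu.Origin = u, v
        h_uv.Twin, h_vu.Twin = h_vu, h_uv
        # Splice the two new half-edges into the old boundary cycle; this
        # splits it into two cycles, one for each of the two resulting faces.
        h_uv.Prev, h_uv.Next = u_in, v_in.Next
        h_vu.Prev, h_vu.Next = v_in, u_in.Next
        u_in.Next.Prev = h_vu
        v_in.Next.Prev = h_uv
        u_in.Next = h_uv
        v_in.Next = h_vu
        # Still to do afterwards: walk each of the two cycles setting
        # IncidentFace, and point each Face.IncidentEdge at one half-edge of
        # its cycle. Vertex.IncidentEdge can be left alone: it only needs to
        # reference *some* incident edge; the rest are reachable via Twin/Next.
        return h_uv, h_vu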

Books on scientific computing, efficient NN inference, and matrix multiplication

I'm trying to learn more about how inference, matrix multiplication, and scientific computing (primarily with tensors/matrices) are implemented efficiently. I'm not sure what the classics here are or what good sources are. I'm primarily looking for books, but classic texts of any kind are welcome, including papers, blog posts, and articles on real-world implementations.

I'd like to understand both how to implement algorithms like GEMM as efficiently as BLAS implementations do, and how to perform inference on neural networks efficiently. By "efficiency" I mean latency and throughput in the classical sense, but also energy efficiency, which seems to be covered less.
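For concreteness, by GEMM I mean the plain dense kernel sketched below in naive Python (just to fix terminology); the part I want to learn about is how production BLAS libraries turn this into something fast and energy-efficient:

    def naive_gemm(A, B):
        # Textbook triple-loop dense matrix multiply, C = A * B, with A and B
        # given as row-major lists of lists. This is the baseline that tuned
        # BLAS kernels (blocking, packing, vectorization) improve on.
        m, k = len(A), len(A[0])
        n = len(B[0])
        C = [[0.0] * n for _ in range(m)]
        for i in range(m):
            for p in range(k):
                a_ip = A[i][p]
                for j in range(n):
                    C[i][j] += a_ip * B[p][j]
        return C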

What are good references/books in this area?

Relations between deciding languages and computing functions in advice machines

I'm trying to understand the implications of translating between functions and languages for P/Poly complexity. I'm not sure whether the following all makes sense; I'm giving it my best shot given my current understanding of the concepts. (I have a project in which I want to discuss Hava Siegelmann's analog recurrent neural nets, which recognize languages in P/Poly, but I'd like to understand, and be able to explain to others, the implications this has for computing functions.)

Suppose I want to use an advice Turing machine $T_1$ to calculate a function from binary strings to binary strings, $f: \{0,1\}^* \rightarrow \{0,1\}^*$. $T_1$ will be a machine that can compute $f$ in polynomial time given advice whose size is polynomial in the length of the argument $s$ to $f$, i.e. $f$ is in P/Poly. (Can I say this? I have seen P/Poly defined only for languages, not for functions with arbitrary (natural number) values.)
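To pin down what I mean, the definition I'm implicitly using (my own phrasing, by analogy with the language case) is: there is a polynomial-time machine $M$ and an advice sequence $(\alpha_n)$ with $|\alpha_n| \le p(n)$ for some polynomial $p$ such that

$$M(s, \alpha_{|s|}) = f(s) \quad \text{for all } s \in \{0,1\}^*.$$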

Next suppose I want to treat $f$ as defining a language $L(f)$, by encoding its arguments and corresponding values into strings: $L(f) = \{\langle s, f(s)\rangle : s \in \{0,1\}^*\}$, where $\langle\cdot,\cdot\rangle$ encodes $s$ and $f(s)$ into a single string.

For an advice machine $ T_2$ that decides this language, the inputs are of length $ n = |\langle s,f(s)\rangle|$ , so the relevant advice for such an input will be the advice for $ n$ .


Question 1: If $T_1$ can return the result $f(s)$ in polynomial time, must there be a machine $T_2$ that decides $\{\langle s,f(s)\rangle\}$ in polynomial time? I think the answer is yes. $T_2$ can extract $s$ from its input $\langle s, v\rangle$, use $T_1$ to calculate $f(s)$, encode $s$ with $f(s)$, and compare the result with the original encoded string. Is that correct?
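To make this concrete, here is a Python sketch of the decider I have in mind. The pairing function is one concrete choice of my own, and $T_1$ and its advice are passed in as black boxes; note that $T_2$'s advice for input length $n$ would have to bundle $T_1$'s advice strings for the possible lengths of $s$, which is still polynomial-size.

    def encode_pair(s, v):
        # One concrete poly-time pairing <s, v>: unary length of s, then a
        # separator 0, then s, then v. Invertible in linear time.
        return "1" * len(s) + "0" + s + v

    def decode_pair(x):
        n = x.index("0")          # length of s, read off the unary prefix
        rest = x[n + 1:]
        return rest[:n], rest[n:]

    def T2_decide(x, T1, advice):
        # Decide x in L(f) (assuming x is well-formed): recover s, recompute
        # f(s) with the advice machine T1 (advice[m] is T1's advice for
        # inputs of length m), re-encode, and compare with the original input.
        s, v = decode_pair(x)
        return encode_pair(s, T1(s, advice[len(s)])) == x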


Question 2 (my real question): If we are given a machine $ T_2$ that can decide $ \{\langle s,f(s)\rangle\}$ in polynomial time, must there be a way to embed $ T_2$ in a machine $ T_3$ so that $ T_3$ can return $ f(s)$ in polynomial time?

I suppose that if $ T_2$ must include $ T_1$ , then the answer is of course yes. $ T_3$ just uses the capabilities of $ T_1$ embedded in $ T_2$ to calculate $ f(s)$ . But what if $ T_2$ decides $ L(f)$ some other way? Is that possible?

If we are given $s$, we know its length, but not the length of $f(s)$. So in order to use $T_2$ to find $f(s)$, it seems there must be a sequential search through all strings of the form $\langle s,r\rangle$ for arbitrary $r$. (I'm assuming that the length of $f(s)$ is unbounded over $s$, but that $f$ has a value for every $s$. So the search can take an arbitrary length of time, but $f(s)$ will ultimately be found.)
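In code, the brute-force use of $T_2$ I'm imagining looks like this (reusing encode_pair from the sketch above, and with T2_decide taking its advice as already fixed):

    from itertools import product

    def compute_f(s, T2_decide):
        # Try every candidate value r in length order until <s, r> is
        # accepted by the decider. It must terminate because f(s) exists,
        # though not in time bounded by |s| alone.
        length = 0
        while True:
            for bits in product("01", repeat=length):
                r = "".join(bits)
                if T2_decide(encode_pair(s, r)):
                    return r
            length += 1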

One thought I have is that the search for a string $ s_f$ that encodes $ s$ with $ f(s)$ has time complexity that depends on the length of the result $ f(s)$ (plus $ |s|$ , but that would be swamped when $ f(s)$ is long).

So now the time complexity does not have to do with the length of the input, but only the length of $ f(s)$ . Maybe $ L(f)$ is in P/Poly if $ f$ is in P? (Still confused here.)

Thinking about these questions in terms of Boolean circuits has not helped.

Computing an automaton for $L(A)/L(B)$ given ones for $A,B$

I'm trying to figure out whether an infinite language changes the answer.

Show that the following language is decidable: $$L=\{\langle A,B \rangle : A, B \text{ are DFAs, } L(B) \text{ is finite, and } L(A)/L(B)=L(0^*1^*)\}.$$

(I am talking about right division.)
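That is, using the standard definition of the right quotient: $$L(A)/L(B) = \{\, w : \exists x \in L(B) \text{ such that } wx \in L(A) \,\}.$$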

We know how to check whether the language of a DFA is finite, and, given two DFAs, we know how to check whether their languages are equal. The algorithms I know for these problems use the DFAs themselves, so it is necessary to have the DFAs in hand in order to decide those problems.

I'm trying to figure out whether $|L(B)|=\infty$ changes the answer. To the best of my understanding, when $|L(B)|<\infty$ we can explicitly construct a DFA that accepts $L(A)/L(B)$, whereas if $|L(B)|=\infty$ all we know is that a DFA accepting $L(A)/L(B)$ exists.
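Concretely, for the finite case the construction I have in mind just re-marks the accepting states of $A$; here is a rough Python sketch (the DFA representation is my own choice):

    def quotient_final_states(A, words_of_B):
        # A is a DFA given as (states, delta, start, accepting), with delta a
        # dict keyed by (state, symbol); words_of_B lists the finite language
        # L(B) explicitly. A state q is accepting in the DFA for L(A)/L(B)
        # iff some x in L(B) drives q into an accepting state of A; the
        # transition function and start state stay those of A.
        states, delta, start, accepting = A

        def run(q, word):
            for ch in word:
                q = delta[(q, ch)]
            return q

        return {q for q in states
                if any(run(q, x) in accepting for x in words_of_B)}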

However, even if $L(B)$ is an infinite language, since there are only finitely many candidate DFAs, one of which accepts $L(A)/L(B)$, I can certainly conclude that there is a Turing machine that decides the language $L$. Right?

Computing average access time

A computer has a cache memory and a main memory with the following features: 

– Cache memory access time: 4 ns
– Main memory access time: 80 ns
– Time needed to load a line into the cache: 120 ns
– Write-through policy

If the hit ratio in this computer is 95% and 90% of the memory accesses are read operations, what is the average access time?

It seems a bit weird to me, since I have read the question "calculate the effective (average) access time (EAT) of this system" on this site, but I still do not know how to do it. I would appreciate any help. Thanks!
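The decomposition I have been trying to apply (assuming a read miss costs the full line load, and that under write-through every write pays the main-memory time) is

$$\bar{t} = f_{r}\,\bigl(H\,t_{cache} + (1-H)\,t_{line}\bigr) + f_{w}\,t_{mem},$$

which with the numbers above would give $0.9\,(0.95\cdot 4 + 0.05\cdot 120) + 0.1\cdot 80 = 16.82$ ns, but I am not sure whether this is the intended treatment of misses and writes.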

Proving injectivity for an algorithm computing a function between sets of different types of partitions [closed]

I am attempting to solve the following problem:

Let $A$ be the set of partitions of $n$ with elements $(a_1, \dots, a_s)$ such that $a_i > a_{i+1}+a_{i+2}$ for all $i < s$, taking $a_{s+1} = 0$. Define $g_n = F_{n+2}-1$ and let $B$ be the set of partitions of $n$ as $b_1 \ge \dots \ge b_s$ such that $b_i \in \{g_1, g_2, \dots\}$ for all $i$, and if $b_1 = g_k$ for some $k$, then $g_1, \dots, g_k$ all appear as some $b_i$. Prove $|A|=|B|$.

Attempt: Let $e_i$ be the vector with $1$ at position $i$ and $0$ elsewhere. If $b_1 = g_k$, let $c=(c_k, \dots, c_1)$, where $c_i$ counts how many times $g_i$ appears in $b$. We calculate $f: B \to A$ as follows:

Let $ c=(c_k,\dots,c_1), a=(0,\dots,0).$ While $ c \ne 0,$ let $ d_1 > \dots > d_k$ be the indices such that $ c_{d_i} \ne 0.$ Replace $ c, a$ with $ c-(e_{d_1}+\dots+e_{d_k}), a+(g_{d_1} e_1 + \dots + g_{d_k} e_k)$ respectively. After the while loop ends, let $ f(b)=a.$

Let $\sum a, \sum b, \sum c$ denote the sums of the components of $a, b, c$ respectively. Since $\sum c$ decreases with every loop iteration, the algorithm terminates and $f(b)$ is well-defined. The quantity $c_k g_k + \dots + c_1 g_1 + \sum a$ does not change from one iteration to the next; it equals $\sum b$ at the start and $\sum a$ at the end, so $\sum f(b) = \sum b = n$, and $f(b)$ is also a partition of $n$. Now $a = (g_k, \dots, g_1)$ after the first loop, which satisfies the condition $g_i > g_{i-1}+g_{i-2}$ since $g_i = F_{i+2}-1 = (F_{i+1}-1)+(F_i-1)+1 > g_{i-1}+g_{i-2}$. Furthermore, after every iteration of the loop, the difference $a_i - (a_{i-1}+a_{i-2})$ changes by $0$, by $g_{d_j}-g_{d_{j-1}} > 0$, or by $g_{d_j}-(g_{d_{j-1}}+g_{d_{j-2}}) > 0$, so we have $a_i > a_{i-1} + a_{i-2}$ at the end and hence $f(b) \in A$. Thus, $f: B \to A$ is well-defined.

In order to prove the injectivity of $f$, it suffices to prove that each loop iteration, viewed as a mapping $(c,a) \to (c',a')$, is injective, which would imply that the mapping $(c,0) \to (0,a)$ created by the while loop is injective. Indeed, if $f(b_1) = f(b_2) = a$ with $(c_1, 0), (c_2, 0)$ being sent to $(0, f(b_1)) = (0,a), (0, f(b_2)) = (0,a)$ respectively, then we have $(c_1, 0) = (c_2, 0) \Rightarrow c_1 = c_2 \Rightarrow b_1 = b_2$.

Suppose $d_1 > \dots > d_i, f_1 > \dots > f_j$ are the non-zero indices of $c_1, c_2$ respectively, and $c_1 - (e_{d_1}+\dots+e_{d_i}) = c_2 - (e_{f_1}+\dots+e_{f_j}), a_1+g_{d_1}e_1 + \dots+ g_{d_i} e_i = a_2 + g_{f_1} e_1 + \dots + g_{f_j} e_j$. If $x \ge 2$ is an entry of $c_1$, it decreases by $1$, so the corresponding entry in $c_2$ after $c_2$ is modified is also $x-1$, which means it must have been $(x-1)+1 = x$ before, since $x-1>0$. Thus, if the values of two positions of $c_1, c_2$ differ, one is $1$ and the other is $0$. However, if $c_1 = (1,0), a_1 = (3,1), c_2 = (0,1), a_2 = (4,1)$, then $(a_1, c_1), (a_2, c_2)$ both get sent to $((5,1), (0,0))$. I can rule out this specific example by arguing that one of the pairs is illegal and could not have come from any choice of initial $c$, but I have no idea how to do this in general.

What should I do next in order to show $ f$ is injective? Furthermore, since the problem I’m trying to prove is correct, injectivity would imply $ f$ is secretly a bijection. But I have no clue on how to even start on the surjectivity of $ f,$ so I just constructed a similar algorithm for $ g: A \to B$ in the hopes of proving $ g$ is injective too. If I can show $ f$ is injective I will probably know how to show $ g$ is.

Here is an example of $ f, g$ in practice:

Let $ n = 41, b = (12, 7, 7, 4, 4, 2, 2, 2, 1) \Rightarrow c = (1, 2, 2, 3, 1).$

$$((1, 2, 2, 3, 1), (0,0,0,0,0)) \to ((0, 1, 1, 2, 0), (12, 7, 4, 2, 1)) \to ((0, 0, 0, 1, 0), (19,11,6,2,1)) \to ((0,0,0,0,0),(21,11,6,2,1)),$$ so $f(b) = (21,11,6,2,1)$.

Let $ a = (21, 11, 6, 2, 1).$

$$((21,11,6,2,1),(0,0,0,0,0)) \to ((9,4,2,0,0), (1,1,1,1,1)) \to ((2,0,0,0,0),(1,2,2,2,1)) \to ((0,0,0,0,0),(1,2,2,3,1)),$$ so $g(a) = (12, 7, 7, 4, 4, 2, 2, 2, 1)$.
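For sanity checking, here is a small Python sketch of $f$ that reproduces the example above (the indexing is my own: list position $j$ holds $c_j$, so the example's $c = (c_5, \dots, c_1) = (1,2,2,3,1)$ becomes [0, 1, 3, 2, 2, 1]):

    def fib_minus_one(j):
        # g_j = F_{j+2} - 1, so g_1, g_2, g_3, g_4, g_5, ... = 1, 2, 4, 7, 12, ...
        a, b = 1, 1
        for _ in range(j):
            a, b = b, a + b
        return b - 1

    def f_map(c):
        # c[j] is the multiplicity of g_j in b (c[0] is unused padding).
        k = len(c) - 1
        g = [0] + [fib_minus_one(j) for j in range(1, k + 1)]
        a = [0] * (k + 1)
        while any(c[1:]):
            d = [j for j in range(k, 0, -1) if c[j]]   # d_1 > d_2 > ... > d_m
            for pos, j in enumerate(d, start=1):
                c[j] -= 1          # remove one copy of g_j from c
                a[pos] += g[j]     # add g_j to the pos-th part of a
        return [part for part in a[1:] if part]

    print(f_map([0, 1, 3, 2, 2, 1]))   # [21, 11, 6, 2, 1], matching f(b) above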

Computing FIRST and FOLLOW

Given the following grammar with terminals $V_T=\{[,],a,b,c,+,-\}$:

$ S \rightarrow [SX]|a$

$ X \rightarrow +SY|Yb|\epsilon$

$ Y \rightarrow -SXc|\epsilon$

This should be the FIRST function:

$ first(S) = \{[,a\}$

$ first(X) = \{\epsilon,+,-,b\}$

$ first(Y) = \{\epsilon,-\}$

What would the FOLLOW function be?
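For reference, the standard rules I am trying to apply are:

$\$ \in follow(S)$ for the start symbol $S$;

for every production $A \rightarrow \alpha B \beta$, $follow(B) \supseteq first(\beta) \setminus \{\epsilon\}$;

if $\epsilon \in first(\beta)$ (or $\beta$ is empty, i.e. $A \rightarrow \alpha B$), then $follow(B) \supseteq follow(A)$.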