Mathematica not computing matrix product, just returning the multiplication expression

I’m doing some very simple matrix operations in Mathematica, but for some reason the last operation I try to evaluate does not return the actual product; it just shows the symbolic multiplication.

    P = {{1, 2}, {3, 4}}/10;
    im = {{1}, {1}};
    in = {{1}, {1}};
    A = ArrayFlatten[{
       {KroneckerProduct[in\[Transpose], IdentityMatrix[2]]},
       {KroneckerProduct[IdentityMatrix[2], im\[Transpose]]}
       }]
    p = Flatten[P] // MatrixForm
    A.p

This last operation, $A\cdot p$, returns the following:

[Image: the output shows the unevaluated symbolic product instead of a numeric result.]

Why is that so?

Find the position in an array where element-wise multiplication with a string of 1s and 0s results in the maximum value

I have a sequence of 1s and 0s, for example $bits = [1, 0, 1, 1, 1, 0]$. I also have an array of positive integers, for example $arr = [12, 23, 4, 6, 8, 0, 24, 72]$. I need to find the index $i$ in $arr$ at which to align the leftmost element of $bits$ such that

$$\sum_{j = i}^{i + \textrm{length of } bits - 1} bits[j - i] \cdot arr[j]$$

is a maximum. Essentially, I am maximizing the sum of the element-wise products of the two sequences, with $bits$ aligned to start at index $i$ of $arr$.

I need to solve it in $O(n\log n)$ or better, but I can only think of a way to do it in $O(n^2)$. I have a feeling prefix sums could be used, but I am not sure how.
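For concreteness, here is a minimal brute-force sketch of the quantity being maximized (the function name `best_offset` is mine; this is the straightforward quadratic method the question wants to improve on, not a solution meeting the required bound):

    # Brute force for illustration: slide `bits` across `arr` and keep the
    # best starting index.  With n = len(arr) and m = len(bits) this is
    # O(n*m), i.e. the quadratic approach the question wants to beat.
    def best_offset(bits, arr):
        m, n = len(bits), len(arr)
        best_i, best_val = None, float("-inf")
        for i in range(n - m + 1):
            val = sum(bits[k] * arr[i + k] for k in range(m))
            if val > best_val:
                best_i, best_val = i, val
        return best_i

    # Example from the question:
    print(best_offset([1, 0, 1, 1, 1, 0], [12, 23, 4, 6, 8, 0, 24, 72]))
    # -> 1  (aligning at arr[1] gives 23 + 6 + 8 = 37)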

Matrix multiplication over range in $O(n)$

Let $M$ denote the time it takes to multiply two matrices, and $Q$ denote the number of queries.

Is it possible to create a data structure with $O((n+Q)M)$ pre-computation time that can answer range matrix multiplication queries whose endpoints satisfy $l_{i-1}\le l_i$ and $r_{i-1}\le r_i$, with an overall time complexity of $O((n+Q)M)$? I’ve been thinking a lot about how to manipulate two pointers to get the intended result, but I haven’t come up with any approach yet. The matrices are not necessarily invertible.
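For reference, this is the naive way to answer a single query; it recomputes the product from scratch and therefore does not meet the required bound (a sketch only; the function name and the use of NumPy are mine):

    import numpy as np

    # range_product(mats, l, r) returns A[l] @ A[l+1] @ ... @ A[r].
    # Recomputing it directly costs O((r - l) * M) per query, which is
    # what the monotone two-pointer structure is supposed to avoid.
    def range_product(mats, l, r):
        d = mats[0].shape[0]
        prod = np.eye(d, dtype=mats[0].dtype)
        for k in range(l, r + 1):
            prod = prod @ mats[k]
        return prod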

The bit representation of the hashing multiplication method

In the picture below, from CLRS, I fail to understand why exactly $h(k)$ equals the $p$ highest-order bits of the lower $w$-bit half of the product.

For context, this is supposed to compute $h(k) = \lfloor m\,(kA \bmod 1) \rfloor$.

[Figure from CLRS illustrating the multiplication method of hashing.]


For further context, CLRS mentions the following, but I still don’t quite get why those $p$ highest-order bits are the ones we are looking for.

[Excerpt from CLRS discussing the multiplication method.]
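For reference, here is the bit-level computation the figure describes, written out directly (the function name is mine; it assumes $w$-bit keys, $m = 2^p$ buckets, and $A = s/2^w$ for an integer $0 < s < 2^w$, as in CLRS):

    # h(k) = floor(m * (k*A mod 1)) with A = s / 2^w and m = 2^p.
    # k*A mod 1 equals r0 / 2^w, where r0 is the low w-bit half of k*s,
    # so multiplying by 2^p and taking the floor keeps exactly the
    # p highest-order bits of r0.
    def mult_hash(k, s, w, p):
        product = k * s                   # fits in 2w bits: r1 * 2^w + r0
        r0 = product & ((1 << w) - 1)     # low w bits of the product
        return r0 >> (w - p)              # top p bits of r0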

Multiplication mod 2 without extra registers

For an arbitrary bitstring $(x_1, x_2, \ldots, x_n)$ and an $n\times n$ invertible binary matrix $M$ (fixed ahead of time), I would like to construct a circuit $C$ acting on these $n$ bits whose output is the bitstring $(y_1, y_2, \ldots, y_n)$ satisfying $$\begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_n \end{pmatrix} = M \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{pmatrix} \bmod 2\,.$$ Extra registers are not allowed, and the circuit $C$ should only contain $NOT$ and $CNOT$ gates (where $CNOT(x, y) = (x, x + y \bmod 2)$). Since $M$ is invertible, the computation is reversible.

The lower bound is trivially given by $ O(n^2)$ operations. (That’s how you would usually multiply matrices, if you had access to the original values of registers all the time. The question, however, is inspired by quantum computation, where one cannot store the initial values, and extra qubits are expensive.)

A known fact from quantum information is that such a circuit can be constructed with at most $O(\exp(n))$ gates. The goal is to design one using a sub-exponential number of gates.
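To make the model concrete, here is a small sketch of what "no extra registers" means: the only state is the $n$-bit register itself, and every gate updates it in place (the gate-list encoding and function name below are my own):

    # Apply a circuit of NOT / CNOT gates to an n-bit register, in place.
    # ("NOT", t) flips bit t; ("CNOT", c, t) sets x[t] <- x[t] xor x[c].
    def apply_circuit(x, gates):
        x = list(x)
        for gate in gates:
            if gate[0] == "NOT":
                _, t = gate
                x[t] ^= 1
            else:                          # ("CNOT", c, t)
                _, c, t = gate
                x[t] ^= x[c]
        return x

    # Example: a single CNOT(0, 1) realizes the invertible matrix
    # [[1, 0], [1, 1]] over GF(2): y1 = x1, y2 = x1 + x2 (mod 2).
    print(apply_circuit([1, 1], [("CNOT", 0, 1)]))   # -> [1, 0]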

How to perform the matrix multiplication in the MixColumns step of AES?

I am studying AES and trying to implement it, but I am having difficulty understanding the MixColumns step. In that step we have to perform a matrix multiplication between the state matrix and another fixed matrix. Here is the example given in the material I am studying from:

[Image: worked MixColumns multiplication example from the study material.]

I do not understand the 03*2F part. How did it turn into (02*2F) xor 2F? Is the material correct, or does it contain a mistake?
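The identity in question is arithmetic in $GF(2^8)$: since 03 = 02 xor 01, multiplying by 03 is the same as multiplying by 02 and then xor-ing in the original byte. A small sketch (the function names are mine; the reduction polynomial $x^8+x^4+x^3+x+1$, i.e. 0x11B, is the one AES uses):

    # GF(2^8) multiplication by 02 ("xtime") and by 03, as used in MixColumns.
    def xtime(b):
        """Multiply a byte by 02 in GF(2^8)."""
        b <<= 1
        if b & 0x100:        # overflow out of 8 bits:
            b ^= 0x11B       # reduce by the AES polynomial
        return b & 0xFF

    def mul03(b):
        """Multiply a byte by 03: 03*b = (02*b) xor b."""
        return xtime(b) ^ b

    print(hex(xtime(0x2F)))  # 02 * 2F = 0x5E
    print(hex(mul03(0x2F)))  # 03 * 2F = 0x5E xor 0x2F = 0x71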

Time complexity of an $O(n)$ loop that contains a multiplication ($O(n^2)$)

Assume the implementation of the multiplication operator in some language is known to be $O(n^2)$.

Given this pseudocode:

    func wibble_wobble(List<Integer> input):
        Integer constant = input.length;
        return List<Integer> { item * constant foreach item in input };

Since the loop inside the new list initialization is $O(n)$, but the multiplication operation inside it is $O(n^2)$, is this function considered to have a time complexity of $O(n)$, $O(n^2)$, or maybe even $O(n \cdot n^2) = O(n^3)$?

My gut instinct says it is $O(n)$, because a change in the input would not change the time complexity of the multiplication operation, and thus the multiplication would not need to be considered; but I am not sure.
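One way to make the question precise is to separate the two quantities that are both being called $n$: the length of the list and the size of the individual integers (the symbol $m$ below is mine, not from the question):

$$T(\text{wibble\_wobble}) = \sum_{\text{item} \in \text{input}} T_{\times}(\text{item}, \text{constant}) = n \cdot O(m^2) = O(n\,m^2),$$

where $n$ is the list length and $m$ is the number of digits (or bits) in the operands. If $m$ is bounded by the machine word size, the $O(m^2)$ factor is a constant and the loop is $O(n)$ overall.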