## What is an efficient way to get a look-at direction from either a quaternion or a transformation matrix?

So, I have an object in my custom engine (C++), with a column-major transform in world space. I’m using a package that takes a look-at direction as an input. What’s the most efficient way to get a look-at direction from this transform? Do I extract the rotation matrix? Do I try to extract a quaternion?
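In case a sketch helps: for a column-major transform the rotation's basis vectors are the first three columns, so the cheapest route is usually to read the forward axis straight out of the matrix rather than extracting a quaternion first. A minimal illustration in Python/NumPy (the C++ version is a direct transcription), assuming an OpenGL-style convention where "forward" is $$-Z$$ — your engine's convention may differ:

```python
import numpy as np

def forward_from_matrix(M):
    # For a column-major transform the rotation's basis vectors are the
    # first three columns; with an OpenGL-style -Z "forward" convention
    # the look-at direction is the negated, normalized third column.
    f = -M[:3, 2]
    return f / np.linalg.norm(f)

def forward_from_quaternion(q):
    # q = (w, x, y, z), assumed unit length; this is the rotation matrix's
    # third column (the local +Z axis), negated -- i.e. R @ (0, 0, -1).
    w, x, y, z = q
    return -np.array([2*(x*z + w*y), 2*(y*z - w*x), 1 - 2*(x*x + y*y)])
```

If you already store a quaternion, rotating the reference forward vector as above is cheaper than building the full matrix; if you only have the matrix, the column read is essentially free.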

## Creating a Matrix from Integrals

I have a table of integrals that I want to put in an $$n×n$$ matrix. I tried doing it the following way:

```mathematica
phix[x_, n_] := Exp[-n \[Alpha] x^2/2]
phiy[y_, m_] := Exp[-m \[Beta] y^2/2]

const = {List[
     Integrate[
      x^2 y^2 phix[x, n1] phix[x, n2] phiy[y, m1] phiy[y, m2],
      {x, 0, Infinity}, {y, 0, Infinity}],
     {n1, 1, 3}, {n2, 1, 3}, {m1, 1, 3}, {m2, 1, 3}]} // MatrixForm
```

but what I get as output is the following, instead of the matrix form.

I also tried to use Table instead of List, but I still don't get the output in matrix form. I need the output as a matrix because I would like to calculate its eigenvalues and eigenvectors.

Any help would be greatly appreciated.
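A side observation that may simplify things: the integrand factors into an x-part and a y-part, and each factor is a Gaussian moment with the closed form $$\int_0^\infty x^2 e^{-a x^2}\,dx = \sqrt{\pi}/(4 a^{3/2})$$, so for numeric parameters the 3×3×3×3 table collapses to a Kronecker product of two 3×3 matrices — which is already the 9×9 matrix needed for eigenvalues (in Mathematica the analogous step is to build the table with Table and flatten the 4-index array into a 9×9 matrix). A sketch of the idea in Python/NumPy, assuming $$\alpha = \beta = 1$$ for concreteness:

```python
import numpy as np

# Assumed numeric values for the symbolic parameters alpha and beta
alpha, beta = 1.0, 1.0

def moment2(a):
    # Closed form:  integral_0^inf  x^2 exp(-a x^2) dx  =  sqrt(pi) / (4 a^(3/2))
    return np.sqrt(np.pi) / (4.0 * a ** 1.5)

idx = range(1, 4)
Ix = np.array([[moment2((n1 + n2) * alpha / 2) for n2 in idx] for n1 in idx])
Iy = np.array([[moment2((m1 + m2) * beta / 2) for m2 in idx] for m1 in idx])

# The integrand separates in x and y, so the 4-index table collapses to a
# Kronecker product: a 9 x 9 matrix indexed by the (n, m) pairs.
M = np.kron(Ix, Iy)
eigenvalues = np.linalg.eigvalsh(M)
```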

## Creating a block matrix from arrays of blocks

I am trying to generate a matrix from square blocks. Effectively, I have an $$n×n$$ matrix polynomial $$P(l)$$, the $$q$$-th derivative of $$P(l)$$ with respect to $$l$$, denoted by $$P^{(q)}(l)$$, and a block of zeroes, which I’ll just call $$0$$. I have some integer $$k$$ such that if $$k=1$$ then I am generating the matrix

$$R= \begin{pmatrix} P(l) \end{pmatrix}$$

If $$k=2$$ then I should generate

$$R = \begin{pmatrix} P(l) & 0 \\ \frac{1}{1!} P^{(1)}(l) & P(l) \end{pmatrix}$$

If $$k=3$$ then

$$R = \begin{pmatrix} P(l) & 0 & 0 \\ \frac{1}{1!} P^{(1)}(l) & P(l) & 0 \\ \frac{1}{2!} P^{(2)}(l) & \frac{1}{1!} P^{(1)}(l) & P(l) \end{pmatrix}$$

and so forth. Generally,

$$R = \begin{pmatrix} P(l) & 0 & \cdots & 0 & 0 \\ \frac{1}{1!} P^{(1)}(l) & P(l) & \cdots & 0 & 0 \\ \frac{1}{2!} P^{(2)}(l) & \frac{1}{1!} P^{(1)}(l) & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \frac{1}{(k-1)!} P^{(k-1)}(l) & \frac{1}{(k-2)!} P^{(k-2)}(l) & \cdots & \frac{1}{1!} P^{(1)}(l) & P(l) \end{pmatrix}$$

is an $$nk×nk$$ matrix.

I would prefer a simple and understandable approach. My idea was to start with a zero matrix $$R$$ of dimensions $$nk×nk$$ and then fill it with two "for" loops, placing the corresponding derivative block wherever it is needed. I’m not sure what the loop statements should be. I found other questions that were similar but more complicated and specific. Any help appreciated, thank you.
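The two-loop idea can be sketched as follows (Python/NumPy here; the blocks are assumed to be precomputed in a list `blocks` where `blocks[q]` holds $$P^{(q)}(l)$$, with at least $$k$$ entries): iterate over block rows $$i$$ and block columns $$j \le i$$, and at block position $$(i, j)$$ place the block of derivative order $$q = i - j$$ scaled by $$1/q!$$.

```python
import numpy as np
from math import factorial

def block_toeplitz(blocks, k):
    # blocks[q] holds the n x n block P^(q)(l); blocks[0] is P(l) itself.
    n = blocks[0].shape[0]
    R = np.zeros((n * k, n * k))
    for i in range(k):              # block row
        for j in range(i + 1):      # block column, lower triangle only
            q = i - j               # derivative order at block position (i, j)
            R[i*n:(i+1)*n, j*n:(j+1)*n] = blocks[q] / factorial(q)
    return R
```

The block positions above the diagonal are never written, so they stay at the zeros the matrix was initialized with.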

## Given a row sum vector and a column sum vector, determine if they can form a boolean matrix

For example, for a boolean matrix of size $$3×4$$, the row sum vector $$R = (3, 3, 0, 0)$$ and the column sum vector $$C = (2, 2, 2)$$ form a match because I can construct the boolean matrix:

$$\begin{matrix} & \begin{bmatrix} 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 \end{bmatrix} & \begin{pmatrix} 2 \\ 2 \\ 2 \end{pmatrix} = C \\ R = & \begin{pmatrix} 3 & 3 & 0 & 0 \end{pmatrix} & \end{matrix}$$

However, the column vector $$C’ = (4, 1, 1)$$ doesn’t form a match with $$R$$.

So, given two vectors sorted in descending order, $$R_{1, w}$$ and $$C_{h, 1}$$, whose accumulated sums are equal, $$T = \sum_j R[1, j] = \sum_i C[i, 1]$$, how can I check in polynomial time whether $$R$$ and $$C$$ form a matching, i.e. whether there exists a matrix $$M_{h,w}$$ having $$R$$ and $$C$$ as its row and column sum vectors?

More specifically, in case it helps make the check faster, in my specific case $$R$$ and $$C$$ have the following properties:

• $$h \leq w$$
• The combined number of positive values in $$R$$ and $$C$$ is $$> w$$. For example, in the example above $$R$$ has two positive values and $$C$$ has three, and $$2 + 3 > w = 4$$.
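One standard polynomial-time answer to this kind of feasibility question is the Gale–Ryser theorem: with the row sums sorted in non-increasing order, a 0-1 matrix exists iff the totals agree and, for every prefix length $$k$$, $$\sum_{i=1}^{k} r_i \le \sum_{j} \min(c_j, k)$$. A short sketch (Python; using the question's example, reading $$C$$ as the row sums and $$R$$ as the column sums of the 3×4 matrix):

```python
def gale_ryser(row_sums, col_sums):
    """Is there a 0-1 matrix with these row and column sums? (Gale-Ryser)"""
    r = sorted(row_sums, reverse=True)
    if sum(r) != sum(col_sums):
        return False
    # Prefix condition: the k largest row sums must not exceed the number
    # of 1s that the first k rows can absorb, counted column by column.
    for k in range(1, len(r) + 1):
        if sum(r[:k]) > sum(min(c, k) for c in col_sums):
            return False
    return True

feasible = gale_ryser([2, 2, 2], [3, 3, 0, 0])      # True: the example matrix exists
infeasible = gale_ryser([4, 1, 1], [3, 3, 0, 0])    # False: (4, 1, 1) fails at k = 1
```

The check runs in $$O(h \cdot w)$$ naively (and can be made near-linear with prefix sums), which satisfies the polynomial-time requirement.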

## Regression coefficients of a matrix

Assume we have a matrix X with shape (1000, 20). How do we generate the regression coefficients for it?
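If the question is ordinary least squares, the shape (1000, 20) suggests 1000 observations of 20 features; the coefficients then also require a response vector $$y$$, which the question doesn't mention, so one is assumed (and synthesized) in the sketch below. The coefficients are the minimizer of $$\|y - X\beta\|^2$$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 1000 observations of 20 features, plus a
# response y generated from known coefficients (both assumed here).
X = rng.normal(size=(1000, 20))
beta_true = rng.normal(size=20)
y = X @ beta_true + 0.01 * rng.normal(size=1000)

# Ordinary least squares: beta minimizes ||y - X @ beta||^2
beta, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
```

With far more rows than columns and full column rank, `lstsq` recovers the unique least-squares solution; for regularized variants one would swap in ridge or lasso instead.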

## Books on scientific computing, efficient NN inference, and matrix multiplication

I’m trying to learn more about how inference, matrix multiplication, and scientific computing (primarily with tensors/matrices) work. I’m not sure what the classics or good sources here are. I’m primarily looking for books, but classic texts of any kind are welcome (including papers, blogs, and articles on real-world implementations).

I’d like to gain an understanding both of how to implement algorithms like GEMM as efficiently as BLAS implementations do, and of how to perform inference on neural networks efficiently. By "efficiency" I mean latency and throughput, as is classically meant, but also energy efficiency, which seems to be covered less.

What are good references/books in this area?

## Trouble recovering rotation and translation from an essential matrix

I am having trouble recovering rotation and translation from an essential matrix. I am constructing this matrix using the following equation: $$E = R \left[t\right]_{\times}$$

which is the equation listed on Wikipedia. With my calculated essential matrix I am able to show that the following relation holds: $$\hat{x}^{\top} E x = 0$$

for the forty or so points I am randomly generating and projecting into the coordinate frames. I decompose $$E$$ using SVD, then compute the two possible translations and the two possible rotations. These solutions differ significantly from the components I started with.

I have pasted a simplified version of the problem below. Is there anything wrong with how I am recovering these components?

```python
import numpy as np

t = np.array([-0.08519122, -0.34015967, -0.93650086])
R = np.array([[ 0.5499506 ,  0.28125727, -0.78641508],
              [-0.6855271 ,  0.68986729, -0.23267083],
              [ 0.47708168,  0.66706632,  0.57220241]])

def cross(t):
    return np.array([
        [0, -t[2], t[1]],
        [t[2], 0, -t[0]],
        [-t[1], t[0], 0]])

E = R.dot(cross(t))

u, _, vh = np.linalg.svd(E, full_matrices=True)

W = np.array([[0, -1, 0],
              [1,  0, 0],
              [0,  0, 1]])

Rs = [u.dot(W.dot(vh.T)), u.dot(W.T.dot(vh.T))]
Ts = [u[:, 2], -u[:, 2]]
```
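For comparison, here is what the textbook (Hartley–Zisserman-style) recovery looks like under the convention $$E = [t]_{\times} R$$, with two caveats worth checking against the code above: `np.linalg.svd` already returns $$V^{\top}$$ (so no extra transpose), and translation is recovered only up to sign and overall scale. This is a hedged sketch with a made-up pose, not necessarily your convention:

```python
import numpy as np

def skew(v):
    # Skew-symmetric matrix with skew(v) @ u == np.cross(v, u)
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Hypothetical ground-truth pose: unit-length translation, rotation about z
t = np.array([1.0, 2.0, 2.0]) / 3.0
c, s = np.cos(0.4), np.sin(0.4)
R = np.array([[c, -s, 0.0],
              [s,  c, 0.0],
              [0.0, 0.0, 1.0]])

E = skew(t) @ R  # Longuet-Higgins / Hartley-Zisserman convention: E = [t]_x R

U, _, Vt = np.linalg.svd(E)  # Vt is already V^T -- do not transpose it again
W = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])

Rs = [U @ W @ Vt, U @ W.T @ Vt]  # two rotation candidates (fix sign if det < 0)
Ts = [U[:, 2], -U[:, 2]]         # translation up to sign (and overall scale)

# One of the four (t, R) combinations reproduces E exactly
ok = any(np.allclose(skew(tc) @ Rc, E) for tc in Ts for Rc in Rs)
```

In a real pipeline the four combinations are disambiguated by triangulating a point and keeping the pair that puts it in front of both cameras.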

## Why can we not obtain a unique solution for orientation from Essential Matrix?

In computer vision, why can we not obtain a unique solution for the projection matrix given an essential matrix? Why is it said that it can only be obtained up to ‘scale’, and what does this mean?
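One way to see the "scale" part: the epipolar constraint that defines $$E$$ is homogeneous, so the image data can never fix an overall magnitude, and the baseline length is therefore unobservable. Sketched in LaTeX:

```latex
% The epipolar constraint determines E only up to a nonzero scalar \lambda:
\hat{x}^{\top} E \, x = 0
\quad\Longrightarrow\quad
\hat{x}^{\top} (\lambda E) \, x = 0
\qquad \forall\, \lambda \neq 0 .
% Hence t is recovered only as a direction (a unit vector), and the remaining
% four (R, \pm t) combinations from the SVD decomposition are disambiguated by
% a cheirality check: triangulated points must lie in front of both cameras.
```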

## Matrix Perception and Running Silent in Shadowrun 5e

The Shadowrun 5e core rulebook states on page 235:

If you’re trying to find an icon that’s running silent (or if you’re running silent and someone’s looking for you), the first thing you need to do is have some idea that a hidden icon is out there. You can do this with a hit from Matrix Perception Test; asking if there are icons running silent in the vicinity (either in the same host or within 100 meters) can be a piece of information you learn with a hit. Once you know a silent running icon is in the vicinity, the next step is to actually find it. This is done through an Opposed Computer + Intuition [Data Processing] v. Logic + Sleaze Test. If you get more hits, you perceive the icon as normal; on a tie or more hits by the defender, it stays hidden and out of reach.

Note that if there are multiple silent running icons in the vicinity, you have to pick randomly which one you’re going to look at through the Opposed Test. Marks can’t run silent because they’re already pretty hidden, but all other Matrix objects can be switched to silent running by their owners.

At the same time, the Matrix Perception sidebar on page 235 says:

When you take a Matrix Perception action, each hit can reveal one piece of information you ask of your gamemaster. Here’s a list of some of the things Matrix Perception can tell you. It’s not an exhaustive list, but it should give you a pretty good idea about how to use Matrix Perception: If you know at least one feature of an icon running silent, you can spot the icon (Running Silent, below). The marks on an icon, but not their owners

Now, the questions. Imagine the following situation: a decker has bought a hundred RFID tags (1 nuyen each) and set them to run silent (or has even bought stealth RFID tags, which run silent by default, at 10 nuyen each), and goes off to perform some hacking. The target is protected by a security decker.

Question 1. The security decker rolls his Matrix Perception to look for silent icons, gets at least one hit, and learns that there are icons running silent nearby. Does he get an exact count – that there are 101 entities hiding nearby (100 silent RFID tags and one decker, who probably has more than one device, though they are likely grouped in a PAN and represented as one icon) – or just that there are hidden entities?

Question 2. Is there any way to focus on the decker rather than the RFID tags when trying to reveal silent icons? I.e., does ‘persona’ count as a feature of an icon?

Question 3. What counts as a feature of an icon?

Question 4. The decker used the Hack on the Fly action and successfully left a mark on the device he is trying to hack. The security decker used Matrix Perception to check on the device, scored one hit, and can see the decker’s mark. Can that mark be used to narrow the search for that decker?

## Parallel Matrix Manipulation: find eigenvalues and construct list

I’m having some trouble with the Parallel commands in Mathematica 12.1:

I need to construct a table whose entries are {M, Eigenvalues of X[M]}, where X is a square matrix of dimension N, with N large (>3000), and M a parameter. Specifically, I do the following:

```mathematica
AbsoluteTiming[
 BSg1P = Table[M = m;
    {M, #} & /@ (Eigenvalues[N[X]]), {m, -2, 2, 1}];
]
```

and I compare with

```mathematica
AbsoluteTiming[
 BSg1P = ParallelTable[M = m;
    {M, #} & /@ (Eigenvalues[N[X]]), {m, -2, 2, 1}];
]
```

The computing time is similar in both cases: the difference is around 6 s out of a total of 300 s, which makes no sense if the evaluation is actually performed in parallel. Since I have 2 processors, I would expect roughly half the time, or at least a considerable reduction.

Am I doing something wrong? Or is there something about parallelization that I don’t understand?

On the other hand, if ParallelTable is not the right tool, is there a faster way to compute the eigenvalues of X[M] in parallel?
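Part of the explanation may be that dense machine-precision eigenvalue computations already call into a multithreaded LAPACK/BLAS, so the cores can be busy even in the serial Table, and adding a second layer of process-level parallelism buys little. For comparison, a hedged Python/NumPy sketch of the same sweep (`X` here is a made-up parameter-dependent symmetric matrix, since the real X[M] isn't shown); a process pool on top of this typically only pays off when the per-call eigensolver is restricted to a single thread:

```python
import numpy as np

def X(M, size=200):
    # Stand-in for the parameter-dependent matrix; symmetric so eigvalsh applies.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(size, size))
    A = (A + A.T) / 2
    return A + M * np.eye(size)

def eigs_for(M):
    # One sweep point: a (parameter, eigenvalues) pair, like {M, Eigenvalues[...]}
    return M, np.linalg.eigvalsh(X(M))

# Serial sweep; np.linalg.eigvalsh already uses the BLAS/LAPACK thread pool.
results = [eigs_for(m) for m in range(-2, 3)]
```

To test whether the same effect explains the Mathematica timing, one experiment is to restrict the kernel's internal thread count and re-run the Table vs. ParallelTable comparison.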

Thanks.