Books on scientific computing, efficient NN inference, and matrix multiplication

I'm trying to learn more about how inference, matrix multiplication, and scientific computing (primarily with tensors/matrices) work. I'm not sure what the classics in this area are or what the good sources are. I'm primarily looking for books, but classic texts of any kind are welcome (papers, blog posts, and write-ups of real-world implementations included).

I'd like to gain an understanding of both how to implement algorithms like GEMM as efficiently as BLAS implementations do and how to perform inference on neural networks efficiently. By "efficiency" I mean latency and throughput, as classically meant, but also energy efficiency, which seems to get far less coverage.
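To make "as efficient as BLAS" concrete, the gap between a textbook GEMM and a tuned BLAS call is easy to measure. A minimal sketch in Python/NumPy (sizes and names are just illustrative; NumPy's `@` dispatches to whatever BLAS the installation links against):

```python
import time
import numpy as np

def naive_gemm(A, B):
    # textbook triple loop: same O(n^3) flop count as BLAS, but with
    # no cache blocking, vectorization, or threading
    n, k = A.shape
    _, m = B.shape
    C = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            s = 0.0
            for p in range(k):
                s += A[i, p] * B[p, j]
            C[i, j] = s
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 64))
B = rng.standard_normal((64, 64))

t0 = time.perf_counter(); C_naive = naive_gemm(A, B); t_naive = time.perf_counter() - t0
t0 = time.perf_counter(); C_blas = A @ B;             t_blas = time.perf_counter() - t0

print(np.allclose(C_naive, C_blas))  # True: identical results
print(t_naive > t_blas)              # True: the BLAS call is dramatically faster
```

Much of the GEMM literature is precisely about closing that gap with cache blocking, packing, SIMD microkernels, and threading.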

What are good references/books in this area?

trouble recovering rotation and translation from essential matrix

I am having trouble recovering rotation and translation from an essential matrix. I am constructing this matrix using the following equation: \begin{equation} E = R \left[t\right]_x \end{equation}

which is the equation listed on Wikipedia. With my calculated essential matrix I am able to show that the following relation holds: \begin{equation} \hat{x}^{\top} E x = 0 \end{equation}

for the forty or so points I am randomly generating and projecting into the coordinate frames. I decompose $E$ using SVD, then compute the two possible translations and the two possible rotations. These solutions differ significantly from the components I started with.

I have pasted a simplified version of the problem I am struggling with below. Is there anything wrong with how I am recovering these components?

import numpy as np

t = np.array([-0.08519122, -0.34015967, -0.93650086])

R = np.array([[ 0.5499506 ,  0.28125727, -0.78641508],
              [-0.6855271 ,  0.68986729, -0.23267083],
              [ 0.47708168,  0.66706632,  0.57220241]])

def cross(t):
    return np.array([[ 0,    -t[2],  t[1]],
                     [ t[2],  0,    -t[0]],
                     [-t[1],  t[0],  0   ]])

E = R @ cross(t)  # E = R [t]_x, as above

u, _, vh = np.linalg.svd(E, full_matrices=True)

W = np.array([[0, -1, 0],
              [1,  0, 0],
              [0,  0, 1]])

Rs = [u @ W @ vh, u @ W.T @ vh]  # the two candidate rotations
Ts = [u[:, 2], -u[:, 2]]         # the two candidate translations
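One property worth checking, using the same $E = R\,[t]_\times$ convention and numbers as the snippet: since $[t]_\times t = 0$, it follows that $E t = 0$, so $t$ spans the right null space of $E$ and should line up with the last right singular vector (the last row of `vh`), not necessarily with a column of `u`. A self-contained sanity check:

```python
import numpy as np

t = np.array([-0.08519122, -0.34015967, -0.93650086])
R = np.array([[ 0.5499506 ,  0.28125727, -0.78641508],
              [-0.6855271 ,  0.68986729, -0.23267083],
              [ 0.47708168,  0.66706632,  0.57220241]])

def cross(v):
    # skew-symmetric matrix: cross(v) @ x == np.cross(v, x)
    return np.array([[ 0,    -v[2],  v[1]],
                     [ v[2],  0,    -v[0]],
                     [-v[1],  v[0],  0   ]])

E = R @ cross(t)              # the convention used in the question
u, s, vh = np.linalg.svd(E)

t_unit = t / np.linalg.norm(t)
# E t = R [t]_x t = 0, so t must be the null-space direction of E,
# i.e. the right singular vector for the (near-)zero singular value:
print(abs(vh[-1] @ t_unit))   # ~1.0: vh[-1] is parallel to t
print(abs(u[:, 2] @ t_unit))  # noticeably below 1 here: u[:, 2] is not t
```

So for this convention the translation direction lives in `vh`, which is worth comparing against how `Ts` is built above.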

Matrix Perception and Running Silent in Shadowrun 5e

The Shadowrun 5e core rulebook states on page 235:

If you’re trying to find an icon that’s running silent (or if you’re running silent and someone’s looking for you), the first thing you need to do is have some idea that a hidden icon is out there. You can do this with a hit from Matrix Perception Test; asking if there are icons running silent in the vicinity (either in the same host or within 100 meters) can be a piece of information you learn with a hit. Once you know a silent running icon is in the vicinity, the next step is to actually find it. This is done through an Opposed Computer + Intuition [Data Processing] v. Logic + Sleaze Test. If you get more hits, you perceive the icon as normal; on a tie or more hits by the defender, it stays hidden and out of reach.

Note that if there are multiple silent running icons in the vicinity, you have to pick randomly which one you’re going to look at through the Opposed Test. Marks can’t run silent because they’re already pretty hidden, but all other Matrix objects can be switched to silent running by their owners.

At the same time, the Matrix Perception sidebar on page 235 says:

When you take a Matrix Perception action, each hit can reveal one piece of information you ask of your gamemaster. Here's a list of some of the things Matrix Perception can tell you. It's not an exhaustive list, but it should give you a pretty good idea about how to use Matrix Perception:

  • If you know at least one feature of an icon running silent, you can spot the icon (Running Silent, below).

  • The marks on an icon, but not their owners.

Now, the questions. Imagine the following situation: some decker buys a hundred RFID tags (1 nuyen each) and sets them to run silent (or even buys stealth RFID tags, which run silent by default, at 10 nuyen each), then goes off to do some hacking. The target is protected by a security decker.

Question 1. The security decker rolls Matrix Perception to look for silent icons, gets at least one hit, and learns that there are icons running silent nearby. Does he get an exact count – that there are 101 entities hiding nearby (100 silent RFID tags plus one decker, who probably has more than one device, though they are likely slaved to his PAN and represented as one icon) – or just that there are hidden entities?

Question 2. Is there any way to focus on the decker rather than the RFID tags when trying to reveal silent icons? I.e., does 'persona' count as a feature of an icon?

Question 3. What counts as a feature of an icon?

Question 4. The decker used the Hack on the Fly action and successfully left a mark on the device he is trying to hack. The security decker used Matrix Perception to check on the device, scored one hit, and can see the decker's mark. Can that mark be used to narrow the search for that decker?

Parallel Matrix Manipulation: find eigenvalues and construct list

I’m having some trouble with the Parallel commands in Mathematica 12.1:

I need to construct a table whose entries are {M, eigenvalues of X[M]}, where X[M] is a square matrix of dimension N, with N large (>3000), and M a parameter. Specifically, I do the following:

AbsoluteTiming[
 BSg1P = Table[
    M = m;
    {M, #} & /@ Eigenvalues[N[X]],
    {m, -2, 2, 1}];
 ]

and I compare with

AbsoluteTiming[
 BSg1P = ParallelTable[
    M = m;
    {M, #} & /@ Eigenvalues[N[X]],
    {m, -2, 2, 1}];
 ]

The computing time is similar in both cases: the difference is around 6 seconds out of a total of 300 seconds, which makes no sense if the evaluation really runs in parallel. Since I have 2 processors, I would expect roughly half the time, or at least a considerable fraction off the computing duration.

Am I doing something wrong? Or is there something about parallelization that I don’t understand?

On the other hand, if I don’t want to use ParallelTable, is there a way to compute the eigenvalues of X[M] in a faster parallel form?
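For comparison, here is the same parameter-sweep pattern sketched in Python/NumPy rather than Mathematica (`X_of` is a made-up stand-in for the question's X[M]). One thing this illustrates: dense eigensolvers usually call into multithreaded BLAS/LAPACK already, so the cores may be busy even without any outer-level parallelism, which can make coarse parallelization over parameter values show little extra gain. Because NumPy releases the GIL inside LAPACK, a plain thread pool is enough to run the per-parameter solves concurrently:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def X_of(M):
    # illustrative stand-in for X[M]: a fixed dense matrix shifted by M*I
    rng = np.random.default_rng(0)
    return rng.standard_normal((200, 200)) + M * np.eye(200)

def eigs_for(M):
    return M, np.linalg.eigvals(X_of(M))

# NumPy releases the GIL inside LAPACK, so threads can run eigensolves
# for different parameter values concurrently.
with ThreadPoolExecutor() as pool:
    results = dict(pool.map(eigs_for, range(-2, 3)))

print(sorted(results))     # [-2, -1, 0, 1, 2]
print(results[0].shape)    # (200,)
```

Whether the outer pool helps depends on the matrix size: for one very large matrix, the inner BLAS threading tends to dominate; for many small matrices, parallelism over the parameter pays off.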


When would you use an edge list graph data structure instead of an adjacency list or adjacency matrix?

In what applications would you choose an edge list over an adjacency list or an adjacency matrix?

Sample question, from VisuAlgo: Which graph data structure(s) should you use to store a simple undirected graph with 200 vertices and 19900 edges, where the edges need to be sorted, supposing your computer only has enough memory to store 40000 entries?

There are three choices: an adjacency list, an adjacency matrix, and an edge list.

An edge list is the correct answer here because the edges can be sorted by weight directly (the other two representations would first have to be converted into a sortable list of edges), but what are some other use cases?
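One concrete use case beyond the sample question: Kruskal's minimum spanning tree algorithm, whose first step is exactly "sort all edges by weight" – an operation an edge list supports in a single call. A small self-contained sketch (graph and weights invented for illustration):

```python
def kruskal(num_vertices, edges):
    # edges: list of (weight, u, v) tuples -- i.e. an edge list
    parent = list(range(num_vertices))

    def find(x):
        # union-find root lookup with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):      # the edge list sorts directly
        ru, rv = find(u), find(v)
        if ru != rv:                   # skip edges that would form a cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3), (5, 1, 3)]
mst, total = kruskal(4, edges)
print(total)   # 6: the edges of weight 1, 2, 3 form the MST
```

With an adjacency list or matrix, the same algorithm would need an extra pass to collect the edges before sorting.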


RS code: if I have a generator matrix for a specific code, how do I get the distance of the dual code?

$ \mathcal{R} \mathcal{S}_{6, 3}$ and $ a_{i} \in \mathbb{F}_{11}$

$$G=\begin{pmatrix} 1&1&1&1&1&1 \\ 0&1&2&3&4&5 \\ 0&1^2&2^2&3^2&4^2&5^2 \end{pmatrix}$$

$$G=\begin{pmatrix} 1&1&1&1&1&1 \\ 0&1&2&3&4&5 \\ 0&1&4&9&5&3 \end{pmatrix}$$

I now have to determine the distance of the dual code.

Any clue how to accomplish this?

Thanks for any help.
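Since the code is tiny (the dual of a [6, 3] code over F_11 has only 11^3 = 1331 codewords), one way to check an answer is to brute-force it: compute a basis of the null space of G over F_11 and take the minimum Hamming weight over all nonzero dual codewords. A sketch with my own helper names; for RS_{6,3}, the dual of an MDS code is again MDS, so the expected value is n - (n - k) + 1 = k + 1 = 4:

```python
from itertools import product

p = 11
# generator matrix of RS_{6,3} over F_11: row r is x^r evaluated at x = 0..5
G = [[pow(x, r, p) for x in range(6)] for r in range(3)]

def nullspace_mod_p(M, p):
    # Gaussian elimination over F_p, then read off a kernel basis
    A = [row[:] for row in M]
    rows, cols = len(A), len(A[0])
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if A[i][c] % p), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        inv = pow(A[r][c], p - 2, p)        # inverse via Fermat's little theorem
        A[r] = [v * inv % p for v in A[r]]
        for i in range(rows):
            if i != r and A[i][c] % p:
                f = A[i][c]
                A[i] = [(A[i][j] - f * A[r][j]) % p for j in range(cols)]
        pivots.append(c)
        r += 1
        if r == rows:
            break
    basis = []
    for fc in (c for c in range(cols) if c not in pivots):
        v = [0] * cols
        v[fc] = 1
        for i, pc in enumerate(pivots):
            v[pc] = -A[i][fc] % p
        basis.append(v)
    return basis

H = nullspace_mod_p(G, p)   # basis of the dual code (dimension n - k = 3)
d_dual = min(
    sum(1 for x in cw if x)                 # Hamming weight
    for coeffs in product(range(p), repeat=len(H))
    if any(coeffs)
    for cw in [[sum(a * h[j] for a, h in zip(coeffs, H)) % p
                for j in range(6)]]
)
print(d_dual)   # 4, matching the MDS prediction k + 1
```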

Delete rows or columns of matrix containing invalid elements, such that a maximum number of valid elements is kept

Originally posted on Stack Overflow, but I was told to post it here.

Context: I am doing a PCA on an MxN (N >> M) matrix with some invalid values located in the matrix. I cannot infer these values, so I need to remove all of them, which means deleting each corresponding row or column entirely. Of course, I want to keep as much data as possible. Invalid entries represent ~30% of the data, but most of them are concentrated in a few rows, while the rest are scattered over the matrix.

Some possible approaches:

  • Similar to this problem, where I would format my matrix so that valid data entries equal 1 and invalid entries a huge negative number. However, all the proposed solutions there have exponential complexity, and my problem is simpler.

  • Computing the ratio (invalid data / valid data) for each row and column, deleting the row or column with the highest ratio, recomputing the ratios for the sub-matrix, removing the highest again, and so on until no invalid data is left (I am not sure how many rows or columns can safely be removed in one step). It seems like an okay heuristic, but I am unsure it always gives the optimal solution.

My guess is that this is a standard data-analysis problem, but surprisingly I could not find a solution online.
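For what it's worth, the greedy ratio heuristic from the second bullet is only a few lines. Here is a sketch that removes one row or column per step (which sidesteps the "how many can we remove safely at once" question, at the cost of more iterations); the names are mine, and `mask` marks invalid entries:

```python
import numpy as np

def greedy_clean(mask):
    # mask: boolean matrix, True where the entry is invalid
    rows = list(range(mask.shape[0]))
    cols = list(range(mask.shape[1]))
    while True:
        sub = mask[np.ix_(rows, cols)]
        if not sub.any():
            return rows, cols            # no invalid entries remain
        row_ratio = sub.mean(axis=1)     # fraction invalid per surviving row
        col_ratio = sub.mean(axis=0)     # fraction invalid per surviving column
        ri, ci = row_ratio.argmax(), col_ratio.argmax()
        if row_ratio[ri] >= col_ratio[ci]:
            del rows[ri]                 # drop the worst row...
        else:
            del cols[ci]                 # ...or the worst column

# toy example: invalid data concentrated in row 0, one stray entry elsewhere
mask = np.array([[1, 1, 1, 1],
                 [0, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0]], dtype=bool)
rows, cols = greedy_clean(mask)
print(rows, cols)   # [1, 2, 3] [0, 2, 3]
```

Being greedy, this is not guaranteed optimal (which matches the doubt expressed above), but it handles the "mostly concentrated, a little scattered" pattern well.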

User defined function for creating Row only listing first column of matrix or first element of a vector

I cannot figure out why my rowNameValue[] only lists the first column of a matrix or the first element of a vector. The code is below.

objectName = Function[Null, SymbolName[Unevaluated[#]], {HoldFirst}];

ClearAll[m, b]
m = {{1, 0, -5}, {0, 1, 1}, {0, 0, 0}};
MatrixQ[m]
b = {1, 4, 0};
VectorQ[b]

rowNameValue[symbol_, name_ : Null] := Block[{id, fn},
   id = If[Head[name] === String, name, objectName[symbol],
      objectName[symbol]];
   id = If[MatrixQ[symbol] || VectorQ[symbol], Style[id, Bold], id,
      id];
   fn = If[MatrixQ[symbol] || VectorQ[symbol], MatrixForm,
      TraditionalForm, StandardForm];
   {Row[{id, " \[Rule] "}, " "], Apply[fn, symbol]}
   ];

dataIn[m_, b_] = Block[{},
   Grid[{
     rowNameValue[m, "m"],
     rowNameValue[b, "b"]
     }]]

dataIn[m, b]

Asymmetric Transition Probability Matrix with uniform stationary distribution

I am solving a discrete Markov chain problem, for which I need a Markov chain whose stationary distribution is uniform (or close to uniform) and whose transition probability matrix is asymmetric.

[ Markov chains like those from Metropolis–Hastings can have a uniform stationary distribution, but their transition probability matrix is symmetric. ]
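One standard construction worth noting: any doubly stochastic matrix (every row and every column sums to 1) has the uniform distribution as a stationary distribution, and doubly stochastic matrices can be as asymmetric as you like. A minimal sketch, using a "lazy" walk around a directed cycle:

```python
import numpy as np

n = 5
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5                # stay put with probability 1/2
    P[i, (i + 1) % n] = 0.5      # otherwise step one node clockwise

# rows AND columns sum to 1 => doubly stochastic => uniform is stationary
pi = np.full(n, 1.0 / n)
print(np.allclose(pi @ P, pi))   # True: uniform is stationary
print(np.allclose(P, P.T))       # False: P is clearly asymmetric
```

The self-loops make the chain aperiodic (and the cycle makes it irreducible), so the uniform distribution is not just stationary but also the limiting distribution.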