## Using the fundamental matrix for triangulation?

Given the projection matrices of two cameras ($$P$$, $$P'$$) and a pair of corresponding points $$\{x_i, x'_i\}$$, it is straightforward to triangulate using $$x_i = PX,\ x'_i = P'X$$.
I understand that a similar algebraic process can be used to find $$X$$ using only the fundamental matrix $$F$$, as it also contains $$[t]_\times R$$, but I could not derive such an equation.
Does someone know the derivation of such a process?
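
For reference, the standard linear (DLT) triangulation from two projection matrices that the question starts from can be sketched as follows (a numpy sketch; the function name and conventions are mine):

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear triangulation: each image point (x, y) contributes two
    linear constraints on the homogeneous 3D point X, read off x ~ P X."""
    A = np.array([
        x1[0] * P1[2] - P1[0],   # x * (row 3 of P1) - row 1 of P1
        x1[1] * P1[2] - P1[1],   # y * (row 3 of P1) - row 2 of P1
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # X is the right singular vector for the smallest singular value of A
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X / X[3]              # dehomogenize
```

Note that $$F$$ alone determines the cameras only up to a projective ambiguity: one can extract canonical cameras $$P = [I \mid 0]$$, $$P' = [[e']_\times F \mid e']$$ and triangulate with them, but the resulting $$X$$ is then defined only up to a projective transformation of space.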


## Cutting a matrix into 2 congruent pieces [closed]

I have a matrix of 0s and 1s. The 1s form a pattern, a shape. Cut the shape into 2 congruent pieces and mark them in the matrix with 2s and 3s. In C++, and keep it simple please.


## Find if there is a matrix satisfying the following conditions

Given a matrix $$A_{n\times n} = \{a_{ij}\}$$ with non-negative entries and two vectors $$(r_1,r_2,\ldots,r_n)$$, $$(c_1,c_2,\ldots,c_n)$$ with $$r_i, c_i \in \mathbb{Z}$$, give an efficient algorithm that determines whether there is a matrix $$B_{n\times n} = \{b_{ij}\}$$ with $$b_{ij} \in \mathbb{Z}$$ such that

for every $$1\leq i \leq n$$: $$\sum_{j=1}^n b_{ij} = r_i$$,

for every $$1\leq j \leq n$$: $$\sum_{i=1}^n b_{ij} = c_j$$,

and

$$0 \leq b_{ij} \leq a_{ij}$$ for all $$i,j$$.

I thought about dynamic programming but didn't manage to solve it.
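
For what it's worth, one standard way to attack this (my suggestion, not from the question) is to model it as a flow feasibility problem: a source feeds row node $$i$$ with capacity $$r_i$$, row $$i$$ connects to column $$j$$ with capacity $$a_{ij}$$, and column $$j$$ drains to a sink with capacity $$c_j$$; then $$B$$ exists iff the maximum flow saturates every source edge. A minimal sketch, assuming integer entries in $$A$$:

```python
from collections import defaultdict, deque

def matrix_exists(a, r, c):
    """Feasibility of B via max-flow: source -> row i (capacity r[i]),
    row i -> column j (capacity a[i][j]), column j -> sink (capacity c[j]).
    B exists iff the max flow equals sum(r)."""
    n = len(a)
    # With b_ij >= 0, all row/column sums must be non-negative and agree.
    if sum(r) != sum(c) or any(x < 0 for x in r + c):
        return False
    S, T = 'S', 'T'
    cap = defaultdict(lambda: defaultdict(int))   # residual capacities
    for i in range(n):
        cap[S][('r', i)] = r[i]
        cap[('c', i)][T] = c[i]
        for j in range(n):
            cap[('r', i)][('c', j)] = a[i][j]
    flow = 0
    while True:  # Edmonds-Karp: augment along BFS shortest paths
        parent = {S: None}
        queue = deque([S])
        while queue:
            u = queue.popleft()
            for v in list(cap[u]):
                if cap[u][v] > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if T not in parent:
            break
        aug = float('inf')                        # bottleneck on the path
        v = T
        while parent[v] is not None:
            aug = min(aug, cap[parent[v]][v])
            v = parent[v]
        v = T
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= aug
            cap[v][u] += aug
            v = u
        flow += aug
    return flow == sum(r)
```

An integral max flow always exists when capacities are integers, so the recovered flow values on the row-to-column edges directly give an integer $$B$$.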


## Monopoly Matrix Coding

I am an IB student doing SL math. For my math IA I picked the topic of Monopoly and the optimal way to play. However, in this IA there is a massive matrix that needs to be built to see the long-term probabilities of landing on a given square. I found someone's code on this page https://arxiv.org/pdf/1410.1107.pdf, but I don't know how to code it. I took a grade 11 course in computer science, but I barely know anything LOL. I used Visual Studio in this class, and I hope that I can use Visual Studio to code this. Can someone help? I have no idea if it's possible. thanks guys!
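
The "massive matrix" here is a Markov transition matrix: entry (i, j) is the probability of moving from square i to square j in one turn, and iterating it gives the long-term landing probabilities. A toy Python/numpy sketch, with a made-up 4-square board and fake dice rule purely to show the mechanics (the linked paper builds the analogous matrix for the real board):

```python
import numpy as np

# Toy 4-square circular board: each turn you advance 1 or 2 squares
# with equal probability (a stand-in for the real dice distribution).
n = 4
P = np.zeros((n, n))
for s in range(n):
    P[s, (s + 1) % n] += 0.5
    P[s, (s + 2) % n] += 0.5

# Long-run landing probabilities: start on square 0 and push the
# distribution through the chain until it settles.
dist = np.zeros(n)
dist[0] = 1.0
for _ in range(1000):
    dist = dist @ P
```

On this symmetric toy board the distribution converges to 1/4 for every square; on the real board the interesting part is exactly that it does not come out uniform.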

## Boolean matrix / satisfiability problem [duplicate]

• Possible duplicate: How to enumerate minimal covers of a set (2 answers)

Let $$M$$ be an $$m\times n$$ matrix with all elements in $$\{0,1\}$$, $$m \gg n$$. Let $$\mathbf{v}_1, \ldots, \mathbf{v}_n$$ be the columns of $$M$$.

I want to find all sets of columns $$S = \{\mathbf{v}_{i_1}, \ldots, \mathbf{v}_{i_k}\}$$ such that every row has a $$1$$ in at least one column $$\mathbf{v}_{i_j} \in S$$, with the constraint that $$S$$ is minimal in the sense that deleting any element of $$S$$ means $$S$$ no longer meets this requirement.

Without the minimality constraint, this is a trivial instance of (monotone) SAT: define a variable corresponding to each column of $$M$$, and just read the CNF clauses off the rows of $$M$$.

How can I approach the problem as described? I tried encoding the minimality requirement as additional boolean constraints (which would make the problem regular SAT, so I could use a SAT solver), but this gives $$n^m$$ additional clauses in CNF form, which is intractably large.
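
A brute-force baseline may help make the target output concrete (exponential in $$n$$, so only for small instances; the names are mine):

```python
from itertools import combinations

def minimal_covers(M):
    """Enumerate minimal column covers of a 0/1 matrix by brute force.
    A set of columns covers M if every row has a 1 in at least one of
    them; it is minimal if dropping any column breaks the cover.
    Exponential in the number of columns -- a baseline, not a solver."""
    m, n = len(M), len(M[0])

    def covers(cols):
        return all(any(M[i][j] for j in cols) for i in range(m))

    found = []
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            if covers(cols) and not any(
                covers(cols[:p] + cols[p + 1:]) for p in range(k)
            ):
                found.append(set(cols))
    return found
```

Since covering is monotone, minimality only needs to be checked against single-column deletions, which is what the inner `any` does.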


## Find the minimal subset of rows of a matrix such that the sum of each column over these rows exceeds some threshold

Let $$A$$ be an $$n\times m$$ real-valued matrix. The problem is to find the minimal subset $$I$$ of rows (if there is any) such that the sum of each column $$j$$ over the corresponding rows exceeds some threshold $$t_j$$, i.e. $$\sum_{i\in I}A[i,j]>t_j$$ for all $$j\in\{1,\dots,m\}$$.

Or, stated as optimization problem:

Let $$A\in\mathbb{R}^{n\times m}, t\in\mathbb{R}^m$$. Now solve \begin{align}\min_{\xi\in\{0,1\}^n}\ &\sum_{i=1}^n\xi_i\\ \text{s.t.}\ &\,A^\top\xi>t\,.\end{align}

Actually, I would need a solution only for $$m=2$$, but the general case might be interesting too.
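
For small $$n$$ the problem can at least be brute-forced by subset size, which also pins down the intended output (a sketch; exponential in $$n$$, illustration only):

```python
from itertools import combinations

def minimal_row_subset(A, t):
    """Smallest set of rows whose per-column sums strictly exceed the
    thresholds t[j], found by trying subsets in increasing order of size.
    Exponential in the number of rows -- illustration only."""
    n, m = len(A), len(t)
    for k in range(1, n + 1):
        for rows in combinations(range(n), k):
            if all(sum(A[i][j] for i in rows) > t[j] for j in range(m)):
                return set(rows)
    return None  # no subset meets the thresholds
```

Trying sizes in increasing order guarantees the first hit is a smallest feasible subset, at the cost of visiting up to $$2^n$$ subsets in the worst case.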


## Matrix chain multiplication: Greedy approach

Edit: some suggested a thread in which the algorithm multiplies the two matrices with the lowest cost first. Mine is different: it splits the chain into two parenthesized parts and then continues on each part.

I have tried many ways to disprove this one. The algorithm works like this. Take A = 5×2, B = 2×7, C = 7×3.

First, find the lowest number among the dimensions. Then split the sequence in two at that point: (A)(B•C). Repeat the process for the two parts, and stop when a part has one (or two) matrices. Is this algorithm optimal? It would have to be faster than the usual $$O(n^3)$$ dynamic-programming algorithm.
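
To probe the question empirically, here is a sketch (function names mine) of the described greedy split alongside the standard dynamic program, so the two costs can be compared on concrete dimension sequences:

```python
def dp_cost(p):
    """Standard O(n^3) matrix-chain DP; p holds the n+1 dimensions."""
    n = len(p) - 1
    cost = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + p[i] * p[k + 1] * p[j + 1]
                for k in range(i, j)
            )
    return cost[0][n - 1]

def greedy_cost(p):
    """The greedy from the question: split the chain at the smallest
    interior dimension, then recurse on both halves."""
    if len(p) <= 3:                      # one matrix: free; two: one product
        return 0 if len(p) < 3 else p[0] * p[1] * p[2]
    k = min(range(1, len(p) - 1), key=lambda i: p[i])
    return greedy_cost(p[:k + 1]) + greedy_cost(p[k:]) + p[0] * p[k] * p[-1]
```

On the example above both give 72 multiplications; whether the greedy matches the DP on every input is exactly the open question, and random dimension sequences are an easy way to hunt for a counterexample.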


## Merging 4 matrices to one matrix

I am struggling with the task of merging four matrices as presented below. Since the matrices A-D contain more than just four entries each, it would be too tedious to do it by hand. Is there a simple or clever way to get the result in Mathematica?

A = {{A11,A12}, {A21,A22}}

B = {{B11,B12}, {B21,B22}}

C = {{C11,C12}, {C21,C22}}

D = {{D11,D12}, {D21,D22}}

E = {{A11,B11,A12,B12}, {C11,D11,C12,D12}, {A21,B21,A22,B22}, {C21,D21,C22,D22}}
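
Note the target is not a plain block arrangement but an interleaving: $$E_{2i,2j}=A_{ij}$$, $$E_{2i,2j+1}=B_{ij}$$, $$E_{2i+1,2j}=C_{ij}$$, $$E_{2i+1,2j+1}=D_{ij}$$. A numpy sketch of that pattern (Python rather than Mathematica, to make the indexing explicit; the function name is mine):

```python
import numpy as np

def interleave(A, B, C, D):
    """E[2i, 2j] = A[i, j]; E[2i, 2j+1] = B[i, j];
    E[2i+1, 2j] = C[i, j]; E[2i+1, 2j+1] = D[i, j]."""
    n, m = A.shape
    E = np.empty((2 * n, 2 * m), dtype=A.dtype)
    E[0::2, 0::2] = A    # A fills the even rows, even columns
    E[0::2, 1::2] = B    # B fills the even rows, odd columns
    E[1::2, 0::2] = C    # C fills the odd rows, even columns
    E[1::2, 1::2] = D    # D fills the odd rows, odd columns
    return E
```

The same strided-assignment idea carries over to Mathematica via part assignments on alternating index ranges.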

Max

## How to solve a matrix PDE and stop solving when the solution becomes singular?

My question consists of two parts:

1. How do I get Mathematica to solve a matrix PDE system and plot the result? See below for the system. (By plot the result I mean plot the region where the solution $$\Theta$$ is nonsingular.)
2. How do I stop the integration when the solution matrix becomes singular? I know that away from $$(x_1,x_2)=(0,0)$$ the solution matrix $$\Theta$$ will become singular; how do I stop Mathematica from trying to solve past this point?

The PDE matrix system I am trying to solve is \begin{align} \dot{\Theta}(x_1,x_2)+A&=\lambda \Theta(x_1,x_2) &\quad \text{Equation}\\ \Theta(0,0)&=\begin{pmatrix} 1 & -\frac{1}{2} -\frac{\sqrt{3}}{2} \\ 1 & -\frac{1}{2}-\frac{1}{2\sqrt{3}}+\frac{2}{\sqrt{3}} \end{pmatrix} &\quad \text{Initial condition} \end{align} I have specified the values of $$\dot{\Theta}(x_1,x_2),A,\lambda$$ in the block below:

(* Definitions *)

A = {{-(x1^2 - 1), -2 x2 x1 - 1}, {1, 0}}

lambda = {{1/2, -Sqrt[3]/2}, {Sqrt[3]/2, 1/2}}

(* Value of Theta for x1 = x2 = 0 *)

ThetaInit = {{1, -1/2 - Sqrt[3]/2}, {1, -1/2 - 1/(2 Sqrt[3]) + 2/Sqrt[3]}}

(* Derivative of Theta in terms of t. Note dTheta/dt = x1'(t) D[Theta, x1] + x2'(t) D[Theta, x2] *)

ThetaDot = (-(x1^2 - 1) x2 - x1) D[Theta[x1, x2], x1] + x2 D[Theta[x1, x2], x2]
