Conjectured primality test for numbers of the form $N=4 \cdot 3^n-1$

This is a repost of this question.

Can you provide a proof of, or a counterexample to, the claim given below?

Inspired by the Lucas–Lehmer primality test, I have formulated the following claim:

Let $P_m(x)=2^{-m}\cdot\left((x-\sqrt{x^2-4})^m+(x+\sqrt{x^2-4})^m\right)$. Let $N = 4 \cdot 3^{n}-1$ where $n \ge 3$. Let $S_i=S_{i-1}^3-3S_{i-1}$ with $S_0=P_9(6)$. Then $N$ is prime if and only if $S_{n-2} \equiv 0 \pmod{N}$.

You can run this test here.

Numbers $n$ such that $4 \cdot 3^n-1$ is prime can be found here.

I searched for a counterexample using the following PARI/GP code:

CE431(n1,n2)=
{
  for(n=n1, n2,
    N = 4*3^n - 1;
    S = 2*polchebyshev(9, 1, 3); \\ S_0 = P_9(6) = 2*T_9(3)
    ctr = 1;
    while(ctr <= n-2,
      S = Mod(2*polchebyshev(3, 1, S/2), N); \\ S_i = S_{i-1}^3 - 3*S_{i-1}
      ctr += 1);
    if(S == 0 && !ispseudoprime(N), print("n="n)))
}
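If it helps, the same search can be sketched in Python (my translation, not the original code; `is_prime` is plain trial division in place of `ispseudoprime`, and $7761798 = 2\,T_9(3)$ is the numeric value of $S_0 = P_9(6)$):

```python
def is_prime(m):
    # plain trial division; fine for the small N tested here
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def passes_test(n):
    """Conjectured test for N = 4*3^n - 1 (n >= 3):
    S_0 = P_9(6) = 2*T_9(3) = 7761798, S_i = S_{i-1}^3 - 3*S_{i-1};
    claim: N is prime iff S_{n-2} == 0 (mod N)."""
    N = 4 * 3**n - 1
    S = 7761798 % N
    for _ in range(n - 2):
        S = (S * S * S - 3 * S) % N
    return S == 0

def search_counterexamples(n1, n2):
    # a counterexample would pass the test although N is composite
    return [n for n in range(n1, n2 + 1)
            if passes_test(n) and not is_prime(4 * 3**n - 1)]

print(search_counterexamples(3, 20))
```

For instance, $n=3$ gives the prime $N=107$ and the test passes, while $n=4$ gives the composite $N=323=17\cdot 19$ and the test fails, as the claim predicts.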


A partial answer can be found here.

Is there a point $H$ such that $\frac{AH \cdot DM}{HD^2} = \frac{BH \cdot EN}{HE^2} = \frac{CH \cdot FP}{HF^2}$?


$H$ is a point in a non-isosceles triangle $\triangle ABC$. The intersections of $AH$ and $BC$, $BH$ and $CA$, $CH$ and $AB$ are $D$, $E$, $F$ respectively. $AD$, $BE$ and $CF$ cut the circumcircle $(ABC)$ again at $M$, $N$ and $P$ respectively. Is there a point $H$ such that the following equality is correct? $$\large \frac{AH \cdot DM}{HD^2} = \frac{BH \cdot EN}{HE^2} = \frac{CH \cdot FP}{HF^2}$$

  • If there is not, prove why.

  • If there is, describe how to construct the point $H$.

Of course, point $H$ should be one of the triangle centres identified in the Encyclopedia of Triangle Centers. But I don’t know which one it is.
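Lacking a synthetic idea, one can at least probe candidates numerically. Below is a Python sketch (all helper names are mine, and the sample triangle and the choice $H = $ centroid are only illustrations) that computes the three quantities for a given $H$; feeding it coordinates of ETC centers would show which of them, if any, equalizes the three ratios.

```python
import numpy as np

def circumcenter(A, B, C):
    # Solve |O-A|^2 = |O-B|^2 = |O-C|^2 as a 2x2 linear system.
    M = 2 * np.array([B - A, C - A])
    b = np.array([B @ B - A @ A, C @ C - A @ A])
    return np.linalg.solve(M, b)

def line_intersection(P1, P2, P3, P4):
    # Intersection of lines P1P2 and P3P4.
    d1, d2 = P2 - P1, P4 - P3
    t = np.linalg.solve(np.column_stack([d1, -d2]), P3 - P1)[0]
    return P1 + t * d1

def second_intersection(P, Q, O):
    # P lies on the circle centred at O; return the other point
    # where line PQ meets that circle.
    u = Q - P
    t = -2 * (u @ (P - O)) / (u @ u)
    return P + t * u

def ratios(A, B, C, H):
    O = circumcenter(A, B, C)
    out = []
    for V, W1, W2 in [(A, B, C), (B, C, A), (C, A, B)]:
        D = line_intersection(V, H, W1, W2)   # cevian foot through H
        M = second_intersection(V, D, O)      # second hit on (ABC)
        out.append(np.linalg.norm(V - H) * np.linalg.norm(D - M)
                   / np.linalg.norm(H - D) ** 2)
    return np.array(out)

A, B, C = map(np.array, [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)])
print(ratios(A, B, C, (A + B + C) / 3))  # try H = centroid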

Prove that $\sum_{cyc}\dfrac{a}{b^2} \ge 3 \cdot \sum_{cyc}\dfrac{1}{a^2}$.

$a$, $b$ and $c$ are three positive reals such that $\dfrac{1}{a} + \dfrac{1}{b} + \dfrac{1}{c} = 1$. Prove that $$\large \dfrac{a}{b^2} + \dfrac{b}{c^2} + \dfrac{c}{a^2} \ge 3 \cdot \left(\dfrac{1}{a^2} + \dfrac{1}{b^2} + \dfrac{1}{c^2}\right)$$

Here’s what I did.

We have that $$\left(\dfrac{a}{b^2} + \dfrac{b}{c^2} + \dfrac{c}{a^2}\right)\left(\dfrac{1}{b} + \dfrac{1}{c} + \dfrac{1}{a}\right) \ge \left(\sqrt{\dfrac{a}{b^3}} + \sqrt{\dfrac{b}{c^3}} + \sqrt{\dfrac{c}{a^3}}\right)^2$$

But because $\dfrac{1}{a} + \dfrac{1}{b} + \dfrac{1}{c} = 1$, and since $(u+v+w)^2 \ge 3(uv+vw+wu)$,

$$\implies \dfrac{a}{b^2} + \dfrac{b}{c^2} + \dfrac{c}{a^2} \ge 3 \cdot \left(\dfrac{1}{b}\sqrt{\dfrac{a}{c^3}} + \dfrac{1}{c}\sqrt{\dfrac{b}{a^3}} + \dfrac{1}{a}\sqrt{\dfrac{c}{b^3}}\right)$$

And here I am stuck; I can’t see how to continue.
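For what it is worth, here is one route that closes the gap (a sketch, not necessarily the intended one): substitute $x = 1/a$, $y = 1/b$, $z = 1/c$, so the constraint becomes $x+y+z=1$ and the claim becomes homogeneous.

```latex
$$\frac{a}{b^2}+\frac{b}{c^2}+\frac{c}{a^2}
 = \frac{y^2}{x}+\frac{z^2}{y}+\frac{x^2}{z},
\qquad
3\left(\frac{1}{a^2}+\frac{1}{b^2}+\frac{1}{c^2}\right) = 3(x^2+y^2+z^2).$$
$$(x+y+z)\left(\frac{y^2}{x}+\frac{z^2}{y}+\frac{x^2}{z}\right)
 = (x^2+y^2+z^2) + \sum_{\text{cyc}}\frac{y^3}{x} + \sum_{\text{cyc}}\frac{y^2z}{x},$$
$$\text{AM--GM:}\quad \frac{y^3}{x}+xy \ge 2y^2,
\qquad \frac{y^2z}{x}+zx \ge 2yz \quad\text{(and cyclic versions)},$$
$$\Longrightarrow\ \sum_{\text{cyc}}\frac{y^3}{x} \ge 2(x^2+y^2+z^2)-(xy+yz+zx),
\qquad \sum_{\text{cyc}}\frac{y^2z}{x} \ge xy+yz+zx.$$
```

Adding the two estimates and using $x+y+z=1$ gives the left side $\ge 3(x^2+y^2+z^2)$, which is the claim.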

Stuck on the proof of $AM[2] = BP \cdot NP$

I am trying to solve this problem from Arora–Barak, Exercise 8.3.

To show $BP \cdot NP \subseteq AM[2]$, I have the following.

Since $BP \cdot NP = \{L \mid L \leq_R 3SAT\}$, there exists a PTM $M$ such that $\Pr[3SAT(M(x)) = L(x)] \geq 2/3$, so:

  1. Arthur sends a random string $ r$ to Merlin

  2. Merlin uses $r$ to run the PTM $M(x,r)$ and sends its output to Arthur

  3. Arthur checks if the output from Merlin satisfies $ 3SAT$ and accepts accordingly

I am not sure if this is right, and I do not know how to prove the containment in the opposite direction.

If $\nabla \cdot \vec{F} = 0$, show that $\vec{F}=\nabla \times \int_{0}^{1} \vec{F}(tx,ty,tz)\times(tx,ty,tz)\,dt$.

Suppose $\vec{F}$ is a vector field on $\mathbb{R}^3$ and $\nabla \cdot \vec{F} = 0$. Prove that:

$$\vec{F}=\nabla \times \int_{0}^{1} \vec{F}(tx,ty,tz)\times(tx,ty,tz)\,dt.$$

I tried doing it, but can’t seem to get it exactly right. Can anyone give a worked solution? Naturally, doing just the first component should suffice.
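A symbolic sanity check may help before grinding through the general first component. The following SymPy sketch (the sample field $\vec{F}=(y,z,x)$ is my own choice; any divergence-free field would do) verifies the identity in one concrete case:

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')

# A sample divergence-free field, chosen only for illustration
F = sp.Matrix([y, z, x])
div_F = sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)
assert div_F == 0

r = sp.Matrix([x, y, z])
# F evaluated at (tx, ty, tz), crossed with (tx, ty, tz)
Ft = F.subs({x: t * x, y: t * y, z: t * z}, simultaneous=True)
integrand = Ft.cross(t * r)
G = integrand.applyfunc(lambda e: sp.integrate(e, (t, 0, 1)))

def curl(V):
    return sp.Matrix([
        sp.diff(V[2], y) - sp.diff(V[1], z),
        sp.diff(V[0], z) - sp.diff(V[2], x),
        sp.diff(V[1], x) - sp.diff(V[0], y),
    ])

print(sp.simplify(curl(G) - F).T)  # zero vector
```

In the general case, the same computation shows the first component of the curl equals $\int_0^1 \frac{d}{dt}\bigl(t^2 F_1(tx,ty,tz)\bigr)\,dt = F_1(x,y,z)$, where $\nabla \cdot \vec{F} = 0$ is used to complete the exact $t$-derivative.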

Babylonian notation to decimal notation. Example: $1;12 \cdot 15$

I’m currently working on a program that converts numbers from Babylonian notation into decimal numbers. The problem I have is that the examples and requirements described by the teacher give numbers in the following format:

$$1;12 \cdot 15$$

That would be a number in its “Babylonian” structure. The result, after some operations that I honestly don’t know and that the teacher showed very quickly, seems to be $72.25$ in decimal notation.

That was the example provided, and I’m not too clear about it. I’ve found something similar on Wikipedia, about computing irrational numbers from a sexagesimal representation like the one above, but it doesn’t seem to be the same thing.

I hope somebody has information about Babylonian numbers and this notation, because beyond Wikipedia I haven’t found anything close to my problem. Any hint or help will be really appreciated.
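One reading that reproduces the teacher’s value: take $1$ and $12$ as base-60 digits of the whole part and $15$ as the first fractional digit, since $1 \cdot 60 + 12 + 15/60 = 72.25$. This is a guess at the intended format (in standard transliteration it would be written $1,12;15$). A small Python sketch, with a function name of my own:

```python
def sexagesimal_to_decimal(whole_digits, frac_digits):
    """Convert base-60 digits to a decimal value.
    whole_digits: most significant first, e.g. [1, 12] -> 1*60 + 12 = 72
    frac_digits:  e.g. [15] -> 15/60 = 0.25
    """
    value = 0
    for d in whole_digits:
        value = value * 60 + d
    for i, d in enumerate(frac_digits, start=1):
        value += d / 60**i
    return value

# Reading "1;12 . 15" as whole part (1, 12) and fractional part (15):
print(sexagesimal_to_decimal([1, 12], [15]))  # 72.25
```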

Find compact formula for $B(x)$ such that $ A(x) = P(x) \cdot B(x) $ – generating functions

Let $A(x)$ be the generating function for the number of ways to write $n$ using exactly one of the parts $2$, $3$, $5$ (that part may be taken multiple times).

Let $P(x)$ be the generating function for the number of all partitions.

Find a compact formula for $B(x)$ such that

$ $ A(x) = P(x) \cdot B(x) $ $

My try

$$A(x) = (1+x^2+x^4+\dots) + (1+x^3+x^6+\dots) + (1+x^5+x^{10}+\dots) = \sum_{k} \left([k \bmod 2 = 0] + [k \bmod 3 = 0] + [k \bmod 5 = 0]\right)x^k$$

Now $P(x)$:

$$P(x) = \frac{1}{(1-x)(1-x^2)(1-x^3)\cdots}$$

But how can I get a compact formula from these calculations? The factor $[k \bmod 2 = 0] + [k \bmod 3 = 0] + [k \bmod 5 = 0]$ is what makes the problem.
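One observation that may be the intended “compact” shape (an observation, not necessarily the expected answer): each of the three series is geometric, so

```latex
$$A(x) = \frac{1}{1-x^2} + \frac{1}{1-x^3} + \frac{1}{1-x^5},
\qquad
P(x) = \prod_{k \ge 1} \frac{1}{1-x^k},$$
$$B(x) = \frac{A(x)}{P(x)}
= \left(\frac{1}{1-x^2} + \frac{1}{1-x^3} + \frac{1}{1-x^5}\right)\prod_{k \ge 1}\left(1-x^k\right).$$
```

If a series form of the product is wanted, Euler’s pentagonal number theorem gives $\prod_{k \ge 1}(1-x^k) = \sum_{j=-\infty}^{\infty} (-1)^j x^{j(3j-1)/2}$, so the Iverson-bracket factor never has to be expanded at all.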

Why does $\vec{F}(t) \cdot \vec{v}(t) = 0$ lead to a circular motion?

Here is a mathematical proof that any force $\vec{F}(t)$ acting on a body such that $\vec{F}(t) \cdot \vec{v}(t) = 0$, where $\vec{v}(t)$ is its velocity, cannot change the magnitude of that velocity.

Further, it is stated there that $\vec{v}(t)$ itself cannot change, which I think is nonsense, since:

$$\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \cdot t = \begin{pmatrix} 1 \\ t \\ 0 \end{pmatrix}$$

But maybe I am just wrong. I am further wondering to what extent $\vec{F}(t) \cdot \vec{v}(t) = 0$ leads to a circular motion, and how to prove this using Newton’s laws and calculus.
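To see both halves numerically, here is a small discrete model of my own (not from the linked source): each step rotates $\vec{v}$, which preserves $|\vec{v}|$ exactly, and the turning rate $\kappa(t)$ plays the role of $|\vec{F}|/(m|\vec{v}|)$. A constant rate closes up into a circle; a time-varying rate keeps the speed constant but does not give a circle, even though $\vec{F} \cdot \vec{v} = 0$ throughout.

```python
import math

def simulate(kappa, steps=20000, T=2 * math.pi):
    """Move a unit-speed particle whose acceleration is always
    perpendicular to v: each step rotates v by kappa(t)*dt (which
    preserves |v| exactly) and then advances the position."""
    dt = T / steps
    x = y = 0.0
    vx, vy = 1.0, 0.0
    t = 0.0
    for _ in range(steps):
        ang = kappa(t) * dt
        c, s = math.cos(ang), math.sin(ang)
        vx, vy = c * vx - s * vy, s * vx + c * vy  # pure rotation of v
        x += vx * dt
        y += vy * dt
        t += dt
    return (x, y), math.hypot(vx, vy)

# Constant turning rate: uniform circular motion, path closes after T = 2*pi.
end_const, speed_const = simulate(lambda t: 1.0)
# Varying turning rate: |v| is still exactly 1, but the path is not a circle.
end_var, speed_var = simulate(lambda t: 1.0 + 0.5 * math.sin(t))
print(end_const, speed_const, speed_var)
```

The speed stays at $1$ in both runs, illustrating $\frac{d}{dt}|\vec{v}|^2 = 2\,\vec{v}\cdot\vec{a} = 0$; only the constant-rate run returns to its starting point.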

Why is a $\text{Row} \cdot \text{Row}$ matrix multiplication inconsistent?

I came upon the fact that if you defined matrix multiplication such that $\underbrace{\begin{pmatrix} a&b&c \\ a'&b'&c' \\ a''&b''&c'' \end{pmatrix}}_A \begin{pmatrix} x&y&z \end{pmatrix} = \begin{pmatrix} X&Y&Z \end{pmatrix}$

for $ X = ax + by +cz$ and so on, and then defined

$\begin{pmatrix} x&y&z \end{pmatrix} = \underbrace{\begin{pmatrix} \alpha&\beta&\gamma \\ \alpha'&\beta'&\gamma' \\ \alpha''&\beta''&\gamma'' \end{pmatrix}}_B \begin{pmatrix} \delta&\epsilon&\sigma \end{pmatrix}$

you’ll end up with the fact that the matrix $ AB$ is the same as defined by the normal convention of multiplying matrices.

(Page 20, Arthur Cayley’s A Memoir on the Theory of Matrices.)

So I tried doing the same process with two compound matrices (instead of one compound matrix and a row matrix as is done in the example above).

And I ran into a problem. If we defined a sort of $ \text{row} \cdot \text{row}$ multiplication such that the first Row of the first matrix multiplies with the rows of the second matrix to form the first row of the resultant matrix, the entire convention becomes inconsistent.

As you can see,

$\begin{matrix} \tiny{R_1} \\ \tiny{R_2} \end{matrix} \begin{pmatrix} a&b \\ a'&b' \end{pmatrix} \cdot \begin{matrix} \tiny{r_1} \\ \tiny{r_2} \end{matrix} \begin{pmatrix} c&d \\ c'&d' \end{pmatrix} = \begin{pmatrix} ac+bd & ac'+bd' \\ a'c+b'd & a'c'+b'd' \end{pmatrix}$

And then we defined the $ cd$ matrix as a product of two other matrices:

$\begin{pmatrix} c&d \\ c'&d' \end{pmatrix} = \begin{pmatrix} X&Y \\ X'&Y' \end{pmatrix} \begin{pmatrix} x&y \\ x'&y' \end{pmatrix}$

And then if we tried to find an $lm$ matrix such that,

$\begin{pmatrix} l&m \\ l'&m' \end{pmatrix} \begin{pmatrix} x&y \\ x'&y' \end{pmatrix} = \begin{pmatrix} a&b \\ a'&b' \end{pmatrix} \begin{pmatrix} c&d \\ c'&d' \end{pmatrix}$

You get this:

$\begin{pmatrix} lx+my & lx'+my' \\ l'x+m'y & l'x'+m'y' \end{pmatrix} = \begin{pmatrix} aXx+aYy+bXx'+bYy' & aX'x+aY'y+bX'x'+bY'y' \\ a'Xx+a'Yy+b'Xx'+b'Yy' & a'X'x+a'Y'y+b'X'x'+b'Y'y' \end{pmatrix}$

Which, if I’m not wrong, is a nonsensical matrix equation, since $x'$s and $y'$s appear on the RHS in places where they cannot occur on the LHS.

But I can’t find a satisfactory explanation of why that inconsistency occurs.

On the other hand, if I defined this “row $\cdot$ row” matrix multiplication as $r_1 \cdot (R_1 R_2 R_3)$, i.e., the first row of the second matrix multiplied with the rows of the first matrix to form the first row of the resultant matrix, the method works as it should: I end up at the matrix multiplication convention that is already in use.

And I can’t find a good insight into why $r_1 \cdot (R_1 R_2 R_3)$ works but $R_1 \cdot (r_1 r_2 r_3)$, which looks more promising, gives an inconsistent result.
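One concrete way to see where the asymmetry comes from (a NumPy sketch of the observation, not a full answer): the row $\cdot$ row rule computes $PQ^\top$, and transposition reverses products, so composing two row $\cdot$ row products puts the factors in different orders on the two sides, and no single $lm$ matrix can reconcile them.

```python
import numpy as np

def row_row(P, Q):
    # C[i, j] = (row i of P) . (row j of Q): the "row . row" rule
    return P @ Q.T

A = np.eye(2, dtype=int)
B = np.array([[1, 1], [0, 1]])
C = np.array([[1, 0], [1, 1]])

# The row.row product is nothing but P @ Q.T ...
assert np.array_equal(row_row(B, C), B @ C.T)

# ... and it is not compatible with substitution: one order of composing
# gives (A B^T) C^T, the other gives A (B C^T)^T = A C B^T, because the
# transpose has reversed the order of the factors.
lhs = row_row(row_row(A, B), C)
rhs = row_row(A, row_row(B, C))
print(np.array_equal(lhs, rhs))  # False for these matrices
```

With these particular matrices, `lhs` is $(CB)^\top$ while `rhs` is $CB^\top$, and the two differ, which is exactly the inconsistency observed above; the ordinary product has no such problem because it is associative.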