What is the time complexity of determining whether a solution $x$ exists to $x^k \equiv c \pmod{N}$ if we know the factorization of $N$?

Suppose we are given an integer $c$ and positive integers $k, N$, with no further assumptions on relationships between these numbers. We are also given the prime factorization of $N$. These inputs are written in binary. What is the best known time complexity for determining whether there exists an integer $x$ such that $x^k \equiv c \pmod{N}$?

We are given the prime factorization of $N$ because this problem is thought to be hard on classical computers even for $k = 2$ if we do not know the factorization of $N$.

This question was inspired by this answer, where D.W. stated that the nonexistence of a solution to $x^3 \equiv 5 \pmod{7}$ can be checked by computing the modular exponentiation for $x = 0,1,2,3,4,5,6$, but that if the exponent had been 2 instead of 3, we could have used quadratic reciprocity instead. This led me to discover that there are many other reciprocity laws, such as cubic reciprocity, quartic reciprocity, octic reciprocity, etc., each with its own Wikipedia page.
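For concreteness, here is a minimal brute-force sketch (in Python, with a hypothetical helper name) of the check D.W. describes; it is exponential in the bit length of $N$ and only meant as a baseline, not as the best known algorithm:

```python
def kth_power_residue_exists(k: int, c: int, N: int) -> bool:
    """Brute-force check: does some x in {0, ..., N-1} satisfy x^k = c (mod N)?"""
    c %= N
    return any(pow(x, k, N) == c for x in range(N))

# The example from the quoted answer: x^3 = 5 (mod 7) has no solution.
print(kth_power_residue_exists(3, 5, 7))  # False
print(kth_power_residue_exists(2, 2, 7))  # True, e.g. x = 3 since 3^2 = 9 = 2 (mod 7)
```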

Given $L$ and $D$, find $X$ such that $X \cdot 10^L + D \equiv 0 \pmod{M}$

Given $L$ and $D$, find $X$ such that $X \cdot 10^L + D \equiv 0 \pmod{M}$. The integer $M$ is given and is the same for all calculations; however, we need to solve for $X$ for many different values of $L$ and $D$. One important thing we know is that $\gcd(M, 10) = 1$.

I rewrote the equation in the form $X \cdot 10^L \equiv M - D \pmod{M}$. If $M$ were a prime number, we could just multiply $M - D$ by $(10^L)^{M-2}$ (Fermat's little theorem). However, $M$ might be an arbitrary integer. How can we use the fact that $\gcd(M, 10) = 1$?
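Since $\gcd(M, 10) = 1$, $10^L$ is invertible modulo $M$ even when $M$ is not prime, and the inverse can be computed with the extended Euclidean algorithm instead of the Fermat exponent $M-2$. A minimal sketch of this idea, assuming Python 3.8+ (whose built-in `pow(a, -1, M)` computes a modular inverse) and a hypothetical helper `solve_x`:

```python
def solve_x(L: int, D: int, M: int) -> int:
    """Return X with X * 10**L + D == 0 (mod M), assuming gcd(M, 10) == 1."""
    # pow(10, L, M) is fast modular exponentiation; pow(..., -1, M) is the
    # modular inverse (exists because gcd(M, 10) == 1), available in Python >= 3.8.
    inv = pow(pow(10, L, M), -1, M)
    return (-D * inv) % M

M = 91  # hypothetical composite modulus with gcd(M, 10) == 1
L, D = 5, 1234
X = solve_x(L, D, M)
assert (X * 10**L + D) % M == 0
```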

What is the difference between “$=$” and “$\equiv$”?

I was recently thinking about some of my past math classes, and depending on the context I recall my professors would sometimes use the “$\equiv$” symbol in places where I’d feel “$=$” to be more appropriate. For example, since this would often be the case in my classes on differential equations and Fourier series, we would have (for $n \in \Bbb N, k \in \Bbb Z$)

$$(-1)^{2n+1} \equiv -1 \qquad \sin(k\pi) \equiv 0$$

Is there a particular reason in this context why we would say “$\equiv$” instead of “$=$”? The latter feels more natural in this context, which makes me think that there’s some reason my professors would use the former.

I’m familiar with the use of the “$\equiv$” symbol in the context of, say, elementary number theory (specifically modular arithmetic), where we might say

$$10 \equiv 1 \pmod 3$$

which isn’t saying “$10$ equals $1$”, just that “$10$ is like $1$ in this context.” But that doesn’t seem to fit the first two statements, because I don’t believe that $(-1)^{2n+1}$ is merely like $-1$, or that $\sin(k\pi)$ is merely like $0$; they are $-1$ and $0$ respectively.

Am I just mistaken on this latter fact? Is there something I’m missing? What, precisely, is the difference between the two notations?

Prove that $3^{30} \equiv 1 + 17 \cdot 31 \pmod{31^{2}}$.

I guess this problem is easy, but I cannot solve it.


Of course, I can solve the above problem by direct calculation, but I want to know a smarter solution.

For example, I did the following calculation, but I was not able to finish the problem.

By Fermat’s little theorem,
$3^{30} \equiv 1 \equiv 1 + 17 \cdot 31 \pmod{31}.$

By the binomial theorem,
$$1 + 17 \cdot 31 \equiv (1 + 31)^{17} \equiv 32^{17} \equiv 2^{85} \pmod{31^2}.$$
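Not the smarter proof being asked for, but the congruence and the intermediate chain above can be checked numerically in a few lines of Python:

```python
M = 31**2  # 961

# The target congruence: 3^30 = 1 + 17*31 (mod 31^2); both sides equal 528.
assert pow(3, 30, M) == (1 + 17 * 31) % M

# The intermediate chain: 1 + 17*31 = (1 + 31)^17 = 32^17 = 2^85 (mod 31^2).
assert (1 + 17 * 31) % M == pow(32, 17, M) == pow(2, 85, M)
print("all congruences check out")
```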

There exists $x$ such that $x^k \equiv m \pmod{p_1 p_2}$ $\Leftrightarrow$ there exist $x_1, x_2$ such that $x_1^k \equiv m \pmod{p_1}$ and $x_2^k \equiv m \pmod{p_2}$


A first approach I took was to use $y \equiv m \pmod{p_1},\ y \equiv m \pmod{p_2} \Leftrightarrow y \equiv m \pmod{p_1 p_2}$; then, by setting $x^k = y$, the problem becomes determining whether $y$ has a $k$-th root in $U(\mathbb{Z}_{p_1 p_2})$. However, this doesn’t seem to simplify the problem.

A second approach I took was to use the fact, derived from the CRT, that $U(\mathbb{Z}_{p_1 p_2}) \cong U(\mathbb{Z}_{p_1}) \times U(\mathbb{Z}_{p_2})$. In $U(\mathbb{Z}_{p_i})$, which are cyclic groups, there is a solution to $x^k \equiv m \pmod{p_i}$ $\Leftrightarrow$ $m^{\frac{p_i-1}{\gcd(k,\,p_i-1)}} \equiv 1 \pmod{p_i}$. So, assuming $\gcd(k, p_i - 1) = 1$, there are solutions $x_1, x_2$ to the two equations. But I am struggling to show that $\pi^{-1}(x_1, x_2)$ (where $\pi$ is the isomorphism from the CRT) is a solution to $x^k \equiv m \pmod{p_1 p_2}$.

So, in case my second approach is correct, I would be glad for some help with showing that $\pi^{-1}(x_1, x_2)$ is a solution, and also that if $x$ is a solution mod $p_1 p_2$, then $\pi_1(x), \pi_2(x)$ are solutions mod $p_1$ and $p_2$ respectively.

Also other approaches or ideas would be appreciated.
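To see the second approach in action on hypothetical small primes ($p_1 = 7$, $p_2 = 11$, $k = 3$, $m = 6$), here is a minimal Python sketch: it finds $x_1, x_2$ by brute force, glues them together with the CRT, and checks that the glued residue solves the congruence modulo $p_1 p_2$ (since $x \equiv x_i \pmod{p_i}$ gives $x^k \equiv x_i^k \equiv m \pmod{p_i}$ for each $i$).

```python
from math import gcd

def crt_pair(a1, p1, a2, p2):
    """Combine x = a1 (mod p1), x = a2 (mod p2) for coprime p1, p2 (CRT)."""
    assert gcd(p1, p2) == 1
    # Write x = a1 + p1*t and solve p1*t = a2 - a1 (mod p2); Python >= 3.8 for pow(p1, -1, p2).
    t = ((a2 - a1) * pow(p1, -1, p2)) % p2
    return (a1 + p1 * t) % (p1 * p2)

p1, p2, k, m = 7, 11, 3, 6  # hypothetical small example where both local equations are solvable

# Solve x^k = m separately mod p1 and mod p2 by brute force (toy sizes only).
x1 = next(x for x in range(p1) if pow(x, k, p1) == m % p1)
x2 = next(x for x in range(p2) if pow(x, k, p2) == m % p2)

# Glue the two residues together and verify the combined congruence.
x = crt_pair(x1, p1, x2, p2)
assert pow(x, k, p1 * p2) == m % (p1 * p2)
print(x1, x2, x)
```

The brute-force search is only there to produce $x_1, x_2$ for the toy example; the CRT gluing step is the part relevant to the question.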

Related question:

If $x \equiv a \pmod{p_1}$ and $x \equiv a \pmod{p_2}$, then is it true that $x \equiv a \pmod{p_1 p_2}$?