Does a Returning rune imply Reload 0?

I was experimenting with building a Thor-like Human Fighter in Pathfinder 2e. I’m picking a mixture of feats that work well for both melee and ranged combat, because Thor likes to throw his hammer, Mjolnir. I noticed the feat Double Shot, which reads

Requirements You are wielding a ranged weapon with reload 0. You shoot twice in blindingly fast succession. Make two Strikes, each against a separate target and with a –2 penalty. Both attacks count toward your multiple attack penalty, but the penalty doesn’t increase until after you’ve made both of them.

If I have a Returning (Light) Hammer, does Returning imply Reload 0? The Returning rune reads

Usage: etched onto a thrown weapon. When you make a thrown Strike with this weapon, it flies back to your hand after the Strike is complete. If your hands are full when the weapon returns, it falls to the ground in your space.

While Reload [x] reads

[The number x] indicates how many Interact actions it takes to reload such weapons. This can be 0 if drawing ammunition and firing the weapon are part of the same action.

So…

  • it returns following the Strike
  • there are no Interact actions involved in the Return
  • regardless of whether it was a successful or failed Strike

That seems equivalent to Reload 0, which would enable "Thor" to throw his hammer multiple times per round.

So, to repeat, does a Returning rune imply Reload 0, particularly for the sake of taking the feat Double Shot?

As a side note, unfortunately I can’t find a feat combination that would allow him to “fly” by throwing his hammer while holding on to it.

Does this imply Hamiltonian path cannot be decided in nondeterministic logspace?

Suppose I nondeterministically walk around in a graph with n vertices.

When looking for a Hamiltonian path, at some point I’ve walked n/2 vertices.

There are $\binom{n}{n/2}$ different combinations of vertices I could have walked (meaning the unordered set, not the ordered walk).

Each of those possibilities must put the machine in a distinct state.

If two of them led to the same state, then, depending on which n/2 vertices remain, I would decide the wrong answer for one of them.

Therefore, midway through my processing, after n/2 vertices, I need $\binom{n}{n/2}$ different states. That is too big for logspace.
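To make “too big” concrete (a rough comparison, assuming the walker only keeps $O(\log n)$ bits of worktape plus the current vertex): such a machine has at most polynomially many configurations, while the number of half-way vertex sets is exponential,

$$2^{O(\log n)} = n^{O(1)} \quad \text{versus} \quad \binom{n}{n/2} \geq \frac{2^n}{n+1}.$$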

Therefore you cannot decide a Hamiltonian path by nondeterministically walking around.

Does this imply Hamiltonian path cannot be decided in nondeterministic logspace – at least by “walking around”?

Would a sparse NP-Complete language imply L = NP?

[1] Assume there exists a (sparse NP-Complete language).

Mahaney’s theorem is a theorem in computational complexity theory proven by Stephen Mahaney that states that if any sparse language is NP-Complete, then P=NP.

https://en.wikipedia.org/wiki/Mahaney%27s_theorem

[2] In other words, (sparse NP-Complete language) implies (P = NP).

[3] From [1] and [2] (P = NP).

[4] (P = NP) implies (P-complete = NP-complete).

[5] From [3] and [4] (P-complete = NP-complete).

[6] From [1] and [5] there exists a (sparse P-Complete language).

In 1999, Jin-Yi Cai and D. Sivakumar, building on work by Ogihara, showed that if there exists a sparse P-complete problem, then L = P.

https://en.wikipedia.org/wiki/Sparse_language

[7] In other words, (sparse P-complete problem) implies (L = P).

[8] From [6] and [7] (L = P).

[9] From [3] and [8] (L = NP).
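To summarize the whole chain in one line (taking steps [2], [4] and [7] as given):

$$\exists \text{ sparse NP-complete } S \;\Rightarrow\; P = NP \;\Rightarrow\; S \text{ is sparse and P-complete} \;\Rightarrow\; L = P \;\Rightarrow\; L = NP.$$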

Would $\Sigma_i^P \neq \Pi_i^P$ imply that the polynomial hierarchy cannot collapse to the $i$-th level?

If $\Sigma_i^P = \Pi_i^P$, then it follows that the polynomial hierarchy collapses to the $i$-th level.

What about the case $\Sigma_i^P \neq \Pi_i^P$? For example, consider the case of $NP \neq coNP$. As far as I understand, this would imply the polynomial hierarchy cannot collapse to the first level, since if $PH = NP$, then in particular $coNP \subseteq NP$, which means $NP = coNP$. Can we expand this idea to prove the general case: $\Sigma_i^P \neq \Pi_i^P$ implies $PH$ cannot collapse to the $i$-th level?
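Spelled out, the generalization I have in mind would run as follows (this is just the same complementation step at level $i$, so please correct me if it breaks down): if $PH = \Sigma_i^P$, then

$$\Pi_i^P \subseteq PH = \Sigma_i^P \;\Rightarrow\; \Sigma_i^P \subseteq \Pi_i^P \text{ (taking complements)} \;\Rightarrow\; \Sigma_i^P = \Pi_i^P,$$

so, contrapositively, $\Sigma_i^P \neq \Pi_i^P$ would mean $PH$ cannot equal $\Sigma_i^P$.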

When does $\sqrt{a_1} + \ldots + \sqrt{a_k} \in K$ imply $\sqrt{a_1}, \ldots, \sqrt{a_k} \in K$?

Consider $R = \mathbb{Z}[X_1, \ldots, X_k]$, the polynomial ring in $k$ variables over $\mathbb{Z}$, and let $S = \mathbb{Z}[\sqrt{X_1}, \ldots, \sqrt{X_k}]$. Then $S/R$ is an integral extension of commutative rings; let $f \in R[T]$ be the minimal polynomial of $\alpha = \sqrt{X_1} + \ldots + \sqrt{X_k} \in S$. One sees (for example by passing to the fraction fields and invoking Galois theory) that $\deg(f) = 2^k$.

Some examples of $f$ for small values of $k$ (a sketch of how the $k = 3$ case can be derived follows the list):

  • $k = 1$: $f = T^2 - X_1$
  • $k = 2$: $f = T^4 - 2(X_1 + X_2)T^2 + (X_1 - X_2)^2$
  • $k = 3$: $f = T^8 - 4(X_1 + X_2 + X_3)T^6 + \left(6(X_1^2 + X_2^2 + X_3^2) + 4(X_1X_2 + X_1X_3 + X_2X_3)\right)T^4 + \left(-4(X_1^3 + X_2^3 + X_3^3) + 4(X_1^2X_2 + X_1^2X_3 + X_2^2X_1 + X_2^2X_3 + X_3^2X_1 + X_3^2X_2) - 40X_1X_2X_3\right)T^2 + \left(X_1^2 + X_2^2 + X_3^2 - 2(X_1X_2 + X_1X_3 + X_2X_3)\right)^2$
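For what it's worth, these can be obtained by successive squaring; here is a sketch for $k = 3$, writing $e_1 = X_1 + X_2 + X_3$, $e_2 = X_1X_2 + X_1X_3 + X_2X_3$, $e_3 = X_1X_2X_3$ and $\alpha = \sqrt{X_1} + \sqrt{X_2} + \sqrt{X_3}$:

$$\alpha^2 - e_1 = 2\left(\sqrt{X_1X_2} + \sqrt{X_1X_3} + \sqrt{X_2X_3}\right), \qquad \left(\alpha^2 - e_1\right)^2 - 4e_2 = 8\sqrt{e_3}\,\alpha,$$

so squaring once more gives $\left(\left(\alpha^2 - e_1\right)^2 - 4e_2\right)^2 = 64\,e_3\,\alpha^2$, which expands into the degree-$8$ polynomial above.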

Now consider some arbitrary field $K$ of characteristic different from $2$ and some fixed homomorphism $\varphi : R \to K$. I want to know when the existence of a root of $f(\varphi(X_1), \ldots, \varphi(X_k), T)$ in $K$ implies that $\varphi(X_1), \ldots, \varphi(X_k)$ are squares in $K$, i.e. that $\varphi$ extends to a homomorphism $S \to K$. My understanding is that this will somehow ‘generically’ be the case, by the fact that $K(\sqrt{X_1}, \ldots, \sqrt{X_k}) = K(\sqrt{X_1} + \ldots + \sqrt{X_k})$ for any field $K$ of characteristic different from $2$, but one needs to exclude some values of $t$, possibly depending on the $\varphi(X_i)$.

For example, if $k = 2$, $\varphi : R \to K$ is any morphism and $t \in K$ is such that $f(\varphi(X_1), \varphi(X_2), t) = 0$, then one can verify the identities $$\varphi(X_1) = \left(\frac{t^2 - \varphi(X_2) + \varphi(X_1)}{2t}\right)^2 \enspace \text{and} \enspace \varphi(X_2) = \left(\frac{t^2 - \varphi(X_1) + \varphi(X_2)}{2t}\right)^2$$ under the assumption that $t \neq 0$. Hence, any morphism $\varphi : R \to K$ such that there exists a $t \in K^\times$ with $f(\varphi(X_1), \varphi(X_2), t) = 0$ extends to a morphism $S \to K$.

The condition $t \neq 0$ cannot be dropped: consider for example $\varphi : R \to \mathbb{R}$ defined by $\varphi(X_1) = \varphi(X_2) = -1$; then $0$ is a root of $f(-1, -1, T) = T^4 + 4T^2$, but $-1$ is not a square in $\mathbb{R}$.
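For completeness, here is the computation behind the first identity (writing $x_i = \varphi(X_i)$ and using $t^4 = 2(x_1 + x_2)t^2 - (x_1 - x_2)^2$, which holds because $t$ is a root of $f$):

$$\left(\frac{t^2 - x_2 + x_1}{2t}\right)^2 = \frac{t^4 + 2(x_1 - x_2)t^2 + (x_1 - x_2)^2}{4t^2} = \frac{2(x_1 + x_2)t^2 + 2(x_1 - x_2)t^2}{4t^2} = x_1,$$

and the identity for $x_2$ follows by symmetry.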

How should I approach the problem of finding the values of $t$ one has to exclude for larger values of $k$ – say, for starters, for $k = 3$? Any pointers towards books or articles on related subjects are very welcome.

Why does coNP⊆NP∖P imply that the polynomial hierarchy collapses?

I was looking for some information on 1-in-3 SAT and came across this paper, last updated 9 days ago, which claims that the Polynomial Time Hierarchy collapses “to the level above P=NP”. That’s quite exciting if you ask me, although I don’t have the toolkit to understand the actual proof. My question is about something much simpler.

Specifically, in the abstract on arXiv, the author writes,

Our proof shows the structure formerly known as the Polynomial Hierarchy collapses to the level above P=NP. That is, we show that coNP⊆NP∖P.

Could anyone help me understand why these two statements are equivalent?
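For reference, my literal reading of the second statement (just unpacking the notation, in case I am misreading it): $coNP \subseteq NP \setminus P$ says that every language $L \in coNP$ satisfies both $L \in NP$ and $L \notin P$.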

Does Chowla’s conjecture on the Liouville function imply the Riemann Hypothesis?

A paper on arXiv (see here) claims that Chowla’s conjecture (applied to the Liouville function instead of the Möbius function), i.e., that $$\sum_{n\leq N} \lambda(n+a_1) \lambda(n+a_2) \cdots \lambda(n+a_k) = o(N) \quad \text{as } N \rightarrow \infty,$$ implies the Riemann hypothesis. I have been unable to find any references to this claim after some research.
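Here $\lambda$ denotes the Liouville function, $\lambda(n) = (-1)^{\Omega(n)}$, where $\Omega(n)$ counts the prime factors of $n$ with multiplicity.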

Is this claim new? Any pointers or references are appreciated.