## Can current quantum computers decide languages that Turing Machines cannot?

I am currently taking Computing Theory at university, where we covered Turing decidability, recognizability, and so on. Proving problems undecidable via reductions got me thinking about all the quantum-computer craze.

As such, my question is as follows:

Does there exist some language L that is not decidable by a Turing Machine, but is decidable by a quantum computer?

My hunch is no: from what I understand about quantum computers, a Turing machine can simulate them (albeit more slowly). However, my knowledge of quantum computing is limited to a 1.5-hour lecture from Microsoft on YouTube, and I assume there have been many recent developments that I am not knowledgeable enough to understand.
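For what it's worth, that hunch matches the standard picture: a quantum computation is, mathematically, unitary linear algebra on a state vector, and a classical machine can carry that arithmetic out step by step (with exponential overhead in the number of qubits). A minimal single-qubit sketch of such a simulation, written by hand rather than taken from any textbook:

```python
import math

# A 1-qubit state is a pair of amplitudes (alpha, beta)
# with |alpha|^2 + |beta|^2 = 1. Start in |0> = (1, 0).
state = [1.0, 0.0]

# The Hadamard gate as a 2x2 unitary matrix.
h = 1 / math.sqrt(2)
H = [[h, h],
     [h, -h]]

def apply(gate, st):
    """Applying a gate is ordinary matrix-vector multiplication --
    exactly the kind of arithmetic a Turing machine can simulate."""
    return [sum(gate[i][j] * st[j] for j in range(2)) for i in range(2)]

state = apply(H, state)          # equal superposition of |0> and |1>
probs = [a * a for a in state]   # Born rule: measurement probabilities

print(probs)                     # both entries are (approximately) 0.5
```

Since every gate application is just finite arithmetic like this, anything a quantum circuit decides is also Turing-decidable; the speedup question (BQP vs. P) is separate from the computability question.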

## Relating decidable, undecidable, recognizable, co-recognizable, unrecognizable, countable and uncountable languages

I went through a lot of texts and came up with the following diagram to summarize the relations between decidable, undecidable, recognizable, co-recognizable, unrecognizable, countable, and uncountable languages. Is it correct?

Note that this makes the unrecognizable languages a proper subset of the undecidable ones. Also, countable and uncountable together make up the whole space; that is, there is no language that is neither countable nor uncountable.

In tabular format:

## Set difference of two non-regular languages

Let's say we have two non-regular languages $$L_1$$ and $$L_2$$. Is $$L_1 \setminus L_2$$ always non-regular?
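One way to hunt for a counterexample is to notice that nothing stops $$L_1$$ and $$L_2$$ from overlapping heavily; in particular, taking $$L_2 = L_1$$ gives $$L_1 \setminus L_1 = \emptyset$$, and the empty language is regular. A tiny check on finite fragments of a classic non-regular language (my own illustration, not a proof):

```python
# Fragments (n < 4) of the classic non-regular language
# L = { a^n b^n : n >= 0 }, used here just to illustrate set difference.
L1 = {"a" * n + "b" * n for n in range(4)}
L2 = set(L1)  # take L2 equal to L1

# The set difference of a language with itself is empty --
# and the empty language is regular.
print(L1 - L2)   # set()
```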

## Use the pumping lemma for context-free languages to prove L = {w#w | w \in {a,b}*} is not context-free

I know the basics of using the pumping lemma for context-free languages to prove a language L is not context-free; however, the # symbol seems to be throwing me off, or my understanding is incomplete.
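For reference, here is the lemma being applied, together with the role the # plays in the usual candidate string (standard statement; double-check against your own textbook's wording):

```latex
\textbf{Pumping lemma for context-free languages.} If $L$ is context-free,
then there is a pumping length $p \ge 1$ such that every $s \in L$ with
$|s| \ge p$ can be written as $s = uvxyz$ with
\begin{enumerate}
  \item $|vy| \ge 1$,
  \item $|vxy| \le p$,
  \item $uv^i x y^i z \in L$ for every $i \ge 0$.
\end{enumerate}
% For L = { w#w : w in {a,b}* }, a natural candidate is
% s = a^p b^p \# a^p b^p. The single # cannot be pumped (pumping it
% changes the number of # symbols), so the constraint |vxy| <= p
% limits v and y to a short window on one side of, or straddling, the #.
```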

## Do compilers of high-level programming languages always compile them directly to machine code?

As an amateur Bash/JavaScript scripter who has never written a line of Assembly, I ask:

Do compilers of high-level programming languages always compile them directly to machine code, or are there cases where a compiler for some high-level language compiles it to assembly (and an assembler then assembles that into machine code)?
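The answer is generally no, translation is not always direct: GCC, for example, traditionally emits textual assembly (visible with `gcc -S`) and its driver then invokes the assembler `as`. The broader point, that compilers usually target an intermediate form rather than raw machine code, can be seen even from Python, whose reference implementation compiles source to bytecode for a virtual machine:

```python
import dis

# CPython does not produce machine code at all: it compiles source
# to bytecode for a stack-based virtual machine, an intermediate
# form analogous to a native compiler's assembly output.
code = compile("x = 1 + 2", "<example>", "exec")

# Disassemble the bytecode -- the "assembly language" of the CPython VM.
dis.dis(code)

# The raw compiled form is just bytes, the VM's "machine code".
print(list(code.co_code)[:8])
```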

## Essential difference between assembly languages and all other programming languages

I understand that any assembly language has so little abstraction that anyone who programs in it (an OS creator, a hardware-driver author, a “hacker,” and so on) must know the relevant CPU’s architecture very well — unlike someone who programs in any “higher” language.

For me, this requirement to know the relevant CPU’s architecture very well is the essential difference between assembly languages and all other programming languages, so we get:

• non-assembly/high-level programming languages
• assembly/low-level programming languages
• machine-code languages, which usually aren’t used as programming languages but theoretically could be

Is this the only essential difference? If not, what else is there?

## Disproving Regular-Language Equations with Counterexamples [on hold]

I’m new here and I have a question; I hope you can help.

Let Σ be an alphabet and let L, L′ ⊆ Σ* be arbitrary languages.

Now I have to disprove the following equations:

L ◦ L = L and L ◦ L′ = L′ ◦ L

I have to give two counterexamples.
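Since both claims are about language concatenation, candidate counterexamples can be sanity-checked mechanically on finite languages. A small sketch (the sample sets are my own, not from the exercise):

```python
def concat(A, B):
    """Language concatenation: every word of A followed by every word of B."""
    return {a + b for a in A for b in B}

# Candidate counterexample for L . L = L:
L = {"a"}
print(concat(L, L))          # {'aa'} -- not equal to {'a'}
print(concat(L, L) == L)     # False

# Candidate counterexample for L . L' = L' . L:
L1, L2 = {"a"}, {"b"}
print(concat(L1, L2))        # {'ab'}
print(concat(L2, L1))        # {'ba'}
print(concat(L1, L2) == concat(L2, L1))  # False
```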

Best regards :)

## Is the choice of static and dynamic typing not visible to the programmers of the languages?

From Design Concepts in Programming Languages by Turbak

Although some dynamically typed languages have simple type markers (e.g., Perl variable names begin with a character that indicates the type of value: \$ for scalar values, @ for array values, and % for hash values (key/value pairs)), dynamically typed languages typically have no explicit type annotations.

The converse is true in statically typed languages, where explicit type annotations are the norm. Most languages descended from Algol 68, such as Ada, C/C++, Java, and Pascal, require that types be explicitly declared for all variables, all data-structure components, and all function/procedure/method parameters and return values. However, some languages (e.g., ML, Haskell, FX, Miranda) achieve static typing without explicit type declarations via a technique called type reconstruction or type inference.

Question 1: For dynamically typed languages which “have no explicit type annotations”, do they need to infer/reconstruct the types/classes, by using some type/class reconstruction or type/class inference techniques, as statically typed languages do?

Question 2: The above quote suggests that static/dynamic typing and explicit/implicit type annotations can mix and match.

• Is the choice between static and dynamic typing only internal to the implementations of programming languages, and thus not visible to the programmers of those languages?

• Do programmers only notice whether a language uses explicit type/class annotations, rather than whether it is statically or dynamically typed? Specifically, do languages with explicit type/class annotations all look the same to programmers, regardless of static or dynamic typing? And do languages without explicit type/class annotations all look the same to programmers, regardless of static or dynamic typing?

Question 3: If you can understand the following quote from Practical Foundations of Programming Languages by Harper (a preview version is https://www.cs.cmu.edu/~rwh/pfpl/2nded.pdf),

• Does the syntax for numerals (abstract syntax num[n], concrete syntax overline{n}) and abstractions (abstract syntax fun(x.d), concrete syntax λ(x)d) use explicit types/classes together with dynamic typing?
• If yes, is the purpose of using explicit types/classes to avoid type inference/reconstruction?

Section 22.1 Dynamically Typed PCF

To illustrate dynamic typing, we formulate a dynamically typed version of PCF, called DPCF. The abstract syntax of DPCF is given by the following grammar:

Exp d ::=

| Abstract syntax | Concrete syntax | Reading |
| --- | --- | --- |
| x | x | variable |
| num[n] | overline{n} | numeral |
| zero | zero | zero |
| succ(d) | succ(d) | successor |
| ifz {d0; x.d1} (d) | ifz d {zero → d0 \| succ(x) → d1} | zero test |
| fun(x.d) | λ(x) d | abstraction |
| ap(d1; d2) | d1 (d2) | application |
| fix(x.d) | fix x is d | recursion |

There are two classes of values in DPCF, the numbers, which have the form num[n], and the functions, which have the form fun(x.d). The expressions zero and succ(d) are not themselves values, but rather are constructors that evaluate to values. General recursion is definable using a fixed point combinator but is taken as primitive here to simplify the analysis of the dynamics in Section 22.3.

As usual, the abstract syntax of DPCF is what matters, but we use the concrete syntax to improve readability. However, notational conveniences can obscure important details, such as the tagging of values with their class and the checking of these tags at run-time. For example, the concrete syntax for a number, overline{n}, suggests a “bare” representation, but the abstract syntax reveals that the number is labeled with the class num to distinguish it from a function. Correspondingly, the concrete syntax for a function is λ(x) d, but its abstract syntax, fun(x.d), shows that it also sports a class label. The class labels are required to ensure safety by run-time checking, and must not be overlooked when comparing static with dynamic languages.
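Harper's point about class tags is directly observable in a dynamically typed language: every run-time value carries its class, and operations check it, so dynamic typing is visible to programmers through run-time class errors. A rough analogue of DPCF's num/fun tagging (my own sketch, not Harper's code):

```python
# Tagged values: each value carries its class label, as in DPCF,
# where num[n] and fun(x.d) wear the 'num' and 'fun' tags.
def num(n):
    return ("num", n)

def fun(f):
    return ("fun", f)

def apply(v, arg):
    """Application checks the class tag at run time, as DPCF's dynamics do."""
    tag, payload = v
    if tag != "fun":
        raise TypeError("cannot apply a value of class " + tag)
    return payload(arg)

# A successor function on tagged numbers (it trusts its argument's tag,
# which a fuller sketch would also check).
succ = fun(lambda v: num(v[1] + 1))
print(apply(succ, num(3)))   # ('num', 4)

try:
    apply(num(3), num(1))    # applying a number: run-time class error
except TypeError as e:
    print(e)                 # cannot apply a value of class num
```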

Thanks.

## Does the PHP redirect information disclosure also apply to other languages or frameworks?

When redirecting using header("Location: MyPage.php"); in PHP, any code after the call will still be executed. So, if you’re using this as a way to stop users from accessing pages that require login, the content of the page will still be processed and sent to the client. Using a proxy, you can see that despite the 302 status code, you also receive the content of the page.
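The same failure mode can be reproduced in any stack where a handler sets a redirect status but keeps generating the page. A minimal Python/WSGI sketch of the pattern (the app and its "secret" content are hypothetical, for illustration only):

```python
def leaky_app(environ, start_response):
    # Set a redirect, much like header("Location: ...") in PHP...
    start_response("302 Found", [("Location", "/login")])
    # ...but keep executing and emit the protected page anyway.
    secret = b"top-secret dashboard contents"
    return [secret]

# Simulate a client (or proxy) that ignores the redirect
# and simply reads the response body.
collected = {}
def start_response(status, headers):
    collected["status"] = status
    collected["headers"] = headers

body = b"".join(leaky_app({}, start_response))
print(collected["status"])   # 302 Found
print(body)                  # the "protected" content leaks alongside the redirect
```

The usual fix is the same in every stack: stop processing after issuing the redirect (`exit;` in PHP, returning the redirect response immediately in a framework handler).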

My question is: what other languages or frameworks have this issue?