What’s the difference between the amdgpu kernel driver and the amdgpu pro userland driver?

AFAIK, AMD has a kernel driver, amdgpu; however, they also have a proprietary driver called amdgpu pro that works on top of amdgpu. So, do I need to install the proprietary driver to get more performance? What does the pro driver give us? Will it impact gaming performance, given that OpenGL and Vulkan are only supported by the pro driver?

Can someone explain the difference between MCP neurons and Perceptrons?

I am getting confused with the literature. Is a perceptron simply a network of MCP neurons? From what I understand, in 1957 Rosenblatt developed the perceptron based on relaxed constraints from the MCP neurons (by McCulloch and Pitts). Here are some statements I have come up with:

  1. MCP neurons treated every input equally (i.e. all weights set to 1). Perceptron introduced variable weights, which could therefore be trained.
  2. BOTH MCP and perceptron used a bias that was set to a single value.
  3. BOTH MCP and perceptron have Boolean inputs (at least originally).
  4. BOTH MCP and perceptron apply a threshold activation function (i.e. these networks can tell you if something is A or B, nothing more).

Are these true statements? I am very confused.
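To make the contrast in statement 1 concrete, here is a small illustrative sketch (my own toy code, not historical; all function names are mine): an MCP unit where every input counts equally against a fixed threshold, versus a perceptron whose weights and bias are trained with Rosenblatt’s learning rule.

```python
def mcp_neuron(inputs, threshold):
    # MCP: every input counts equally (weight 1); fires iff the number
    # of active Boolean inputs reaches the fixed threshold.
    return int(sum(inputs) >= threshold)

def perceptron(inputs, weights, bias):
    # Perceptron: weighted sum plus bias, then the same hard threshold.
    return int(sum(w * x for w, x in zip(weights, inputs)) + bias >= 0)

def train_perceptron(samples, epochs=10, lr=1.0):
    # Rosenblatt's learning rule: nudge weights by the prediction error.
    # This training step is exactly what a fixed-weight MCP unit lacks.
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:
            err = target - perceptron(x, w, b)
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# AND as a 2-input MCP unit with threshold 2:
assert [mcp_neuron(x, 2) for x in [(0,0),(0,1),(1,0),(1,1)]] == [0, 0, 0, 1]

# A perceptron learns AND from labelled examples:
and_data = [((0,0),0), ((0,1),0), ((1,0),0), ((1,1),1)]
w, b = train_perceptron(and_data)
assert [perceptron(x, w, b) for x, _ in and_data] == [0, 0, 0, 1]
```

Both units end up computing the same Boolean threshold function here; the difference is only that the perceptron arrived at its weights by training.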

Prove or disprove the existence of a mapping reduction between two sets

I am currently studying mapping reduction in computational theory and finding it hard to grasp the concept fully.

For reference, consider the following given WHILE-Prog sets:

A = { (p, d) | p doesn’t halt on input d } = the complement of the HALT set.
B = { p | p halts on exactly one input (the input is unknown) }

Is A ≤ B? That is, is there a mapping reduction from A to B?
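For reference, by a mapping (many-one) reduction from A to B I mean a total computable function $f$ such that \begin{align} x \in A \iff f(x) \in B \quad \text{for all inputs } x. \end{align}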

Can someone suggest a hint?

Knowing that A belongs to coRE did not help much. B doesn’t seem to belong to either RE or coRE.


Relationship between distances on homogeneous spaces and their Lie groups

Consider the (round) sphere $M=\mathbb{S}^{n-1}$ as a homogeneous $O(n)$-space. Then for $x,y\in\mathbb{S}^{n-1}$ there is $g\in O(n)$ such that $y=g\cdot x$. Denote the Riemannian distance on $\mathbb{S}^{n-1}$ by $d_{\mathbb{S}^{n-1}}$. Intuitively, if $y$ and $x$ are not far apart, then $g$ should be almost the identity (because the $O(n)$-action is smooth). I am able to explicitly construct such a rotation $g$ so that \begin{align} \|g-\operatorname{Id}_{\mathbb{R}^n}\| \leq d_{\mathbb{S}^{n-1}}(y,x) \end{align} where $\|\cdot\|$ is the operator norm for matrices.
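As a sanity check in the lowest-dimensional case $n=2$ (my own addition): a rotation $g=R(\theta)$ of $\mathbb{S}^1$ by an angle $|\theta|\leq\pi$ has eigenvalues $e^{\pm i\theta}$, so \begin{align} \|R(\theta)-\operatorname{Id}_{\mathbb{R}^2}\| = |e^{i\theta}-1| = 2\left|\sin\tfrac{\theta}{2}\right| \leq |\theta| = d_{\mathbb{S}^1}(R(\theta)\cdot x, x), \end{align} so the constant $C=1$ indeed works there.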

Analogously, if $M=\mathrm{Gr}_m(\mathbb{R}^n)$ is the Grassmannian, then by a similar construction using principal angles I can find a rotation $g$ such that $F=g\cdot E$ for $m$-planes $E,F$ and \begin{align} \|g-\operatorname{Id}_{\mathbb{R}^n}\| \leq 2m\; d_{\mathrm{Gr}_m(\mathbb{R}^n)}(F,E) \end{align} where $d_{\mathrm{Gr}_m(\mathbb{R}^n)}$ is the angle metric on $\mathrm{Gr}_m(\mathbb{R}^n)$ (e.g. here).

However, I find these constructions rather unsatisfying and would like to understand if there is a more abstract underlying principle at play.

Here is my question: Given a homogeneous $G$-space $M$, are there always metrics on $M$ and $G$ such that there is a quantitative estimate \begin{align} d_G(g,e) \leq C \; d_M(g\cdot y, x) \end{align} for all $x,y\in M$ and $g\in G$?

Feel free to add any hypotheses (such as compactness etc.) that apply to $\mathbb{S}^{n-1}$, $\mathrm{Gr}_m(\mathbb{R}^n)$ and $O(n)$.

Is there a correspondence of steps between DPLL and sequent-calculus?

Is there a correspondence between the steps in using DPLL to find out that a formula in propositional logic is unsatisfiable and using sequent calculus to prove that its negation is valid?

And given that the latter problem asks for less information (e.g. no need for a “counterexample” in case the formula is invalid), is it possible to solve it more efficiently than standard SAT solving?
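For concreteness, here is a minimal toy DPLL sketch (my own illustrative code, not any real solver: clauses are sets of integer literals, with negative integers meaning negated variables) showing the two kinds of steps I have in mind, unit propagation and splitting:

```python
def dpll(clauses):
    # Work on fresh copies so recursive branches don't interfere.
    clauses = [set(c) for c in clauses]
    # Unit propagation: a one-literal clause forces that literal.
    while True:
        unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        # Satisfied clauses disappear; the complementary literal is removed.
        clauses = [c - {-unit} for c in clauses if unit not in c]
        if any(not c for c in clauses):
            return False        # empty clause derived: conflict
    if not clauses:
        return True             # every clause satisfied
    # Splitting: case distinction on some literal of the first clause.
    lit = next(iter(clauses[0]))
    return dpll(clauses + [{lit}]) or dpll(clauses + [{-lit}])

# (p ∨ q) ∧ (¬p ∨ q) ∧ ¬q is unsatisfiable:
assert dpll([{1, 2}, {-1, 2}, {-2}]) is False
# (p ∨ q) ∧ ¬p is satisfiable (take q = true):
assert dpll([{1, 2}, {-1}]) is True
```

The split looks like a branching rule in sequent-calculus proof search, and unit propagation looks like applying invertible rules; whether this resemblance is a precise correspondence is exactly what I am asking.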

What is the difference between a self-rewriting program and a program-with-updates-and-restart?

I am reading about the Goedel machine http://people.idsia.ch/~juergen/goedelmachine.html and especially the article about a possible implementation (in the Scheme language) of this machine http://people.idsia.ch/~juergen/selfreflection.pdf . The idea is that there is a Scheme environment, then there is a special virtual machine (written in Scheme), and finally there is a program (written in the special language for this virtual machine – L3 in the article) running on that virtual machine. The virtual machine is quite unusual: it not only loads the program, it also provides an extension point through which the program can modify the value of any variable of the executing program, and can modify the instruction tree that sits in the memory of the virtual machine. That is the self-modifying program.

I have 2 questions regarding self-modifying programs:

  1. Is there a virtual machine (e.g. for Java or JavaScript) that exposes such facilities for modifying the values of the variables of a running program, or for modifying the loaded program structure?
  2. I cannot grasp the difference between run-time modification of the self-modifying program (let us call it scenario S1) and the following process, which could implement the self-modifying-program scenario alternatively (let us call it scenario S2):
    1. Program L3 (continuing the notation to name the upper-most program) computes: 1) the updated program – the new code; 2) the new values of all the variables of the old program and the initial values of any additional variables introduced by the updated program; 3) the code point of the updated program at which execution should start.
    2. Program L3 exits and instructs the execution environment about the next steps that should be taken;
    3. The execution environment loads the updated program, assigns the values of the variables and the starting point (point of resume/point of load) according to step 1, and boots the updated program.
    4. Updated program executes and computes the next version of the update in parallel.

Of course, there is a third option (S3) – to use the JavaScript eval construct – it keeps the current code running, but it spawns the new code. So – is there a difference between S1, S2 and S3? The drawback of S3 is that, if the memory used by the previous versions of the code is never recovered, the program cannot run indefinitely.
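For what it is worth, here is a minimal Python sketch of the S3-style mechanism (the question talks about Scheme/Java/JavaScript; Python’s exec plays the role of eval here, and all names are illustrative):

```python
# Sketch of scenario S3: the running program generates new source text for
# its own step function and swaps it in with exec().

def make_step(n):
    # Generate source code for the next version of `step`.
    src = f"def step(x):\n    return x + {n}\n"
    env = {}
    exec(src, env)          # compile and evaluate the freshly generated code
    return env["step"]

step = make_step(1)
assert step(10) == 11
step = make_step(5)         # run-time replacement of the running behaviour
assert step(10) == 15
```

In this sketch the previous version of step simply becomes unreachable once the name is rebound; whether a given runtime actually reclaims that memory is a separate question.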

So – what is the difference between S1 and S2? If we consider a conventional computing architecture where the program is executed deterministically (assuming we can control the time slices between thread executions and so on), then I cannot see the difference between S1 and S2. In both cases the currently running program decides which variables should be changed and how, which instruction should be executed next, and so on. Restart-with-update should be the same as run-time modification of the in-memory running program. Or is there a difference after all?

Just curious – are there any other applications or research efforts on self-modifying code (especially for Java and JavaScript) that could guide me in my efforts to understand how to implement self-modifying programs?

There is http://commons.apache.org/proper/commons-bcel/index.html for Java, but I guess it does not allow modifying the values of the variables of the code itself, or an already loaded (for execution) Java class file. There is also the culture of meta-circular interpreters https://en.wikipedia.org/wiki/Meta-circular_evaluator – but they are more intellectual curiosities, and opening up the running program (variables and code) is not a feature they offer, so they are of no use for self-modifying programs.

Relation between $[L \cap M : K \cap M]$ and $[L : K]$, and the Gauß-Wantzel theorem

The well-known Gauß-Wantzel theorem states that a real number $x$ can be constructed using straightedge and compass only if the minimal polynomial of $x$ (over the field $\mathbf Q$) has degree of the form $2^n$, $n \in \mathbf N$.

It is a corollary of a more general theorem, called the “Wantzel Theorem” in French, which (in the form I know) states that:

Wantzel Theorem

The real number $x$ can be constructed using straightedge and compass if and only if there exists a sequence of commutative fields $L_0 \subset L_1 \subset \dots \subset L_n$ such that:

  1. $L_0 = \mathbf Q$
  2. $x \in L_n$
  3. For all $i = 1, \dots, n$, $[L_i : L_{i-1}] = 2$

I wonder whether, in the latter theorem, condition 2 could be replaced by condition 2′: $L_n = \mathbf Q[x]$.

Of course, constraint 2′ implies constraint 2, so we have one implication.

To get the other implication, I assume I have a sequence $L_0 \subset L_1 \subset \dots \subset L_n$ matching conditions 1, 2 and 3, and I set, for $i = 0, \dots, n$, $L'_i = L_i \cap \mathbf Q[x]$. Then I need to prove that for $i = 1, \dots, n$, $[L'_i : L'_{i-1}] \le 2$.

My question is: is the latter statement true?

In a more general way, if $L$ is a finite extension of a field $K$, and $M$ is another field, what can we say about $[L \cap M : K \cap M]$?
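One observation in that direction (my own example, so please check it): the intersection degree can collapse completely. Take $K = \mathbf Q$, $L = \mathbf Q(\sqrt[3]{2})$ and $M = \mathbf Q(\omega\sqrt[3]{2})$, where $\omega$ is a primitive cube root of unity. Then $L$ is real, $M$ is not, and since $[L : \mathbf Q] = 3$ is prime, $L \cap M$ must be $\mathbf Q$ or $L$; the latter would force $L = M$, which is impossible, so \begin{align} [L \cap M : K \cap M] = [\mathbf Q : \mathbf Q] = 1 < 3 = [L : K]. \end{align} So no lower bound on $[L \cap M : K \cap M]$ in terms of $[L : K]$ can hold in general.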

Thanks in advance!