What’s the difference between a rift to the Astral Plane and a gate to the Astral Plane?

In the description of the Portable Hole, both “a rift to the Astral Plane” and “a gate to the Astral Plane” are mentioned. See the excerpt below (emphasis mine):

If a bag of holding is placed within a portable hole, a rift to the Astral Plane is torn in that place. Both the bag and the cloth are sucked into the void and forever lost. If a portable hole is placed within a bag of holding, it opens a gate to the Astral Plane. The hole, the bag, and any creatures within a 10-foot radius are drawn there, the portable hole and bag of holding being destroyed in the process.

What is the distinction between a rift and a gate to the Astral Plane?

What’s the difference between ioco, uioco and tioco in Model Based Testing?

I’m learning about formal languages and Labelled Transition Systems (LTSs) and how to test systems using Model-Based Testing. Specifically, I’m reading the paper Model Based Testing with Labelled Transition Systems by Jan Tretmans. He introduces the concepts ioco (input-output conformance), uioco (underspecified input-output conformance) and tioco (timed input-output conformance). I could follow his description of ioco and how the implementation should adhere to the specification, formally defined as:

$ i \text{ ioco } s \; =_{def} \; \forall \sigma \in Straces(s) : output(i \text{ after } \sigma) \subseteq output(s \text{ after } \sigma) $

Which basically means:

$ implementation$ ioco-conforms to $ specification$ , iff

• if the $ implementation$ produces output $ x$ after trace $ \sigma$ , then the $ specification$ can also produce $ x$ after $ \sigma$

• if the $ implementation$ cannot produce any output after trace $ \sigma$ , then the $ specification$ cannot produce any output after $ \sigma$ either
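To make sure I understand ioco itself, here is a toy sketch in Python. All names and the coffee-machine example are my own construction, not from the paper, and I restrict myself to deterministic LTSs with a fixed list of suspension traces (real LTSs may be nondeterministic and Straces may be infinite):

```python
# Toy ioco check on small deterministic LTSs.
# Actions starting with "?" are inputs, "!" are outputs; "delta" marks quiescence.

def out(lts, state):
    """Outputs enabled in a state; a quiescent state offers only 'delta'."""
    outs = {a for a in lts.get(state, {}) if a.startswith("!")}
    return outs or {"delta"}

def after(lts, trace, start=0):
    """Follow a trace from the start state; return the reached state, or None."""
    state = start
    for action in trace:
        if action == "delta":
            if out(lts, state) != {"delta"}:  # delta only in quiescent states
                return None
            continue  # quiescence loops on the same state
        nxt = lts.get(state, {}).get(action)
        if nxt is None:
            return None
        state = nxt
    return state

def ioco(impl, spec, traces):
    """Check output(impl after sigma) ⊆ output(spec after sigma) for the given traces."""
    for sigma in traces:
        s_state, i_state = after(spec, sigma), after(impl, sigma)
        if s_state is None or i_state is None:
            continue  # trace not executable on one side
        if not out(impl, i_state) <= out(spec, s_state):
            return False
    return True

# Coffee-machine example: after "?coin" the spec may output coffee or tea.
spec     = {0: {"?coin": 1}, 1: {"!coffee": 2, "!tea": 2}}
impl_ok  = {0: {"?coin": 1}, 1: {"!coffee": 2}}  # fewer outputs: conforms
impl_bad = {0: {"?coin": 1}, 1: {"!beer": 2}}    # unexpected output: fails

traces = [[], ["?coin"]]
print(ioco(impl_ok, spec, traces))   # True
print(ioco(impl_bad, spec, traces))  # False
```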

But then I couldn’t follow his description of what uioco and tioco mean, and I couldn’t find other sources that explain them differently. Could someone explain what the latter two concepts are and how they differ from ioco?

P.S. I also couldn’t find tags about LTSs nor the concepts of Model Based Testing and input-output conformance, so feel free to edit my tags to something more appropriate.

What’s the difference between the amdgpu kernel driver and the amdgpu-pro userland driver?

AFAIK, AMD has a kernel driver, amdgpu; however, they also have a proprietary driver called amdgpu-pro that works on top of amdgpu. So, do I need to install the proprietary driver to get more performance? What does the pro driver give us? Will it impact gaming performance, given that OpenGL and Vulkan are only supported by the pro driver?

Can someone explain the difference between MCP neurons and Perceptrons?

I am getting confused with the literature. Is a perceptron simply a network of MCP neurons? From what I understand, in 1957 Rosenblatt developed the perceptron based on relaxed constraints from the MCP neurons (by McCulloch and Pitts). Here are some statements I have come up with:

  1. MCP neurons treated every input equally (i.e. all weights set to 1). Perceptron introduced variable weights, which could therefore be trained.
  2. BOTH MCP and perceptron used a bias that was set to a single value.
  3. BOTH MCP and perceptron have Boolean inputs (at least originally).
  4. BOTH MCP and perceptron apply a threshold activation function (i.e. these networks can tell you if something is A or B, nothing more).
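To make statement 1 concrete, here is a small Python sketch of my understanding (my own construction, not from the original papers): an MCP unit where every input counts equally versus a perceptron whose weights are learned with the classic perceptron rule.

```python
# MCP unit vs. perceptron -- illustrative sketch, not the historical models verbatim.

def mcp_neuron(inputs, threshold):
    """McCulloch-Pitts unit: every input counts equally (weight 1), hard threshold."""
    return 1 if sum(inputs) >= threshold else 0

def perceptron(inputs, weights, bias):
    """Perceptron: weighted sum plus bias through the same hard threshold."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s >= 0 else 0

def train_perceptron(data, n_inputs, lr=1.0, epochs=20):
    """Classic perceptron learning rule: nudge weights and bias on each mistake."""
    w, b = [0.0] * n_inputs, 0.0
    for _ in range(epochs):
        for x, target in data:
            err = target - perceptron(x, w, b)
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# AND as an MCP unit: the function is wired in by choosing threshold 2.
print([mcp_neuron([a, b], 2) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]

# The perceptron *learns* the same function from examples instead.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data, 2)
print([perceptron(x, w, b) for x, _ in and_data])  # [0, 0, 0, 1]
```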

Are these true statements? I am very confused.

MIC vs PTK, what’s the difference?

My question: what is the difference between a MIC and a PTK, and which one are applications like Aircrack-ng and Pyrit concerned with? Note: I’m only concerned with WPA2-PSK.

From my research, the PTK is the pairwise transient key, and it is derived from the ANonce (a random nonce from the authenticator), the SNonce (a nonce from the supplicant), the two MAC addresses, and the PMK, which is itself derived from the ESSID and the pre-shared key (passphrase). Some sources say that these programs create PMKs and compare the PMKs to the PTKs. Is that correct?
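For reference, the PMK part at least is well defined for WPA2-PSK: it is PBKDF2-HMAC-SHA1 over the passphrase and the SSID, with 4096 iterations and a 32-byte output. A minimal Python sketch (the passphrase and SSID values are just examples I picked):

```python
# WPA2-PSK: PMK = PBKDF2-HMAC-SHA1(passphrase, ssid, 4096 iterations, 32 bytes).
# This is the expensive step that cracking tools precompute; the nonces and MAC
# addresses only enter later, when the PTK is derived from the PMK.
import hashlib

def pmk(passphrase: str, ssid: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

key = pmk("password", "IEEE")  # example inputs, not a real network
print(len(key), key.hex())
```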

But then other sources say that what we are concerned with is the MIC. Some say that the MIC is a hash value, and that programs like Pyrit and Aircrack-ng create these MIC hashes and compare them to the MIC captured in the 4-way handshake.

which one is “really” correct?

• Source that says MIC: Four-way Handshake in WPA-Personal (WPA-PSK)

• Source that says PTK: can an attacker find WPA2 passphrase given WPA key data and WPA MIC


What is the difference between a self-rewriting program and a program-with-updates-and-restart?

I am reading about the Goedel machine http://people.idsia.ch/~juergen/goedelmachine.html and especially the article about a possible implementation (in the Scheme language) of this machine http://people.idsia.ch/~juergen/selfreflection.pdf . The idea is that there is a Scheme environment, then there is a special virtual machine (written in Scheme), and lastly there is a program (written in a special language for this virtual machine, L3 in the article) that runs on it. The virtual machine is quite unusual: it not only loads the program, it also provides an extension point through which the program can modify the value of any variable of the executing program and also modify the instruction tree that sits in the memory of the virtual machine. That is the self-modifying program.

I have 2 questions regarding self-modifying programs:

  1. Is there a virtual machine (e.g. for Java or JavaScript) that exposes such facilities for modifying the values of the variables of the running program or for modifying the loaded program structure?
  2. I cannot grasp the difference between run-time modification of the self-modifying program (let us call it scenario S1) and the following process, which could implement the self-modifying-program scenario as an alternative (let us call it scenario S2):
    1. Program L3 (continuing the notation to name the upper-most program) computes: 1) the updated program, i.e. the new code; 2) the new values of all the variables of the old program and the initial values of the variables newly introduced by the updated program; 3) the code point of the updated program at which execution should start.
    2. Program L3 exits and instructs the execution environment about the next steps that should be taken.
    3. The execution environment loads the updated program, assigns the values of the variables and the starting point (point of resume/load) according to step 1, and boots the updated program.
    4. The updated program executes and computes the next version of the update in parallel.
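To make S2 concrete, here is a minimal Python sketch of the restart-with-update loop as I imagine it (all names are illustrative and mine, not from the Goedel-machine paper; exec stands in for the execution environment reloading the updated program):

```python
# Scenario S2 sketch: the program hands back (new source, carried-over state,
# done flag) and the environment reloads the new source and resumes.

def run_s2(source, state, max_generations=5):
    for _ in range(max_generations):
        env = {}
        exec(source, env)  # environment loads the (possibly rewritten) program
        source, state, done = env["step"](source, state)
        if done:
            return state

# Generation 0 rewrites its own increment from "+ 1" to "+ 10"; the
# environment then restarts the updated program with the carried-over state.
program = """
def step(my_source, state):
    new_source = my_source.replace("+ 1", "+ 10")
    state = state + 1
    return new_source, state, state > 5
"""
print(run_s2(program, 0))  # 11: gen 0 adds 1, the rewritten gen 1 adds 10
```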

Of course, there is a third option (S3): to use the JavaScript eval construct. It keeps the current code running, but it spawns the new code. So, is there a difference between S1, S2 and S3? The drawback of S3 is that the memory used by the previous versions of the code is never reclaimed, so the program cannot be long-running.

So, what is the difference between S1 and S2? If we consider a conventional computing architecture where the program is executed deterministically (assuming we can control the time slices between thread executions and so on), then I cannot see the difference between S1 and S2. In both cases the currently running program decides which variables should be changed and how, which instruction should be executed next, and so on. A restart with an update should be the same as run-time modification of the in-memory running program. Or is there a difference after all?

Just curious: are there any other applications or research efforts around self-modifying code (especially for Java and JavaScript) which could guide me in my efforts to understand how to implement self-modifying programs?

There is http://commons.apache.org/proper/commons-bcel/index.html for Java, but I guess it does not allow modifying the values of the variables of the code itself, or of a Java class file that is already loaded for execution. There is also the culture of meta-circular interpreters https://en.wikipedia.org/wiki/Meta-circular_evaluator , but they are more intellectual curiosities: opening up the running program (its variables and code) is not a feature they offer, so they are of no use for self-modifying programs.