Nmap giving different results between Mac OS scan and Linux (Kali) scan

I'm trying to do a simple TCP scan of a Kubuntu VM (running on VirtualBox) from two different OSes: from the host (a Mac OS system) and from a Kali Linux VM (I tried both VirtualBox and Parallels). Kali Linux (same result on VirtualBox and Parallels) gives:

[screenshot: Nmap output from Kali Linux]

From Mac OS (run with root privileges, to simulate the same scan as from Kali):

[screenshot: Nmap output from Mac OS]

So running an Nmap scan from Mac OS reports more open ports on the same VM, with the same scan privileges, etc. Mind-blowing…

If I scan just one port (from the Kali VM), say 110, the result is that it is closed:

[screenshot: Nmap scan of port 110 from Kali]

Why is this happening?

Finding the similarity between large text files

My first question is: does an algorithm already exist for this? If not, any thoughts and ideas are appreciated.

Let’s say I have two large text files (original file A and new file B). Each file is English prose text (including dialogue) with a typical size of 256K to 500K characters.

I want to compare them to find out how similar the contents of B are to A.

Similar in this case means: all, or a significant part, of B exists in A, with the condition that there may be subtle differences, words changed here and there, or even globally.

In all cases we have to remember that this is looking for similarity, not (necessarily) identity.

Preprocessing for the text (a sketch follows the list):

  1. Remove all punctuation (and close up gaps “didn’t” -> “didnt”);
  2. Lowercase everything;
  3. Remove common words;
  4. Reduce all whitespace to single spaces only, but keep paragraphs.
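
One way to sketch these four steps is below (here in Mathematica, whose built-in EditDistance is also handy for the comparison later). The stopwords argument is assumed to be your own list of common words, paragraphs are assumed to be separated by blank lines, and the Metaphone step is left out since it needs its own implementation:

    (* steps 1-4: strip punctuation, lowercase, drop common words,
       collapse whitespace; returns one string per paragraph *)
    preprocess[text_String, stopwords_List] :=
     Module[{paras = StringSplit[text, "\n\n"]},
      StringRiffle[
         DeleteCases[
          StringSplit[ToLowerCase[StringDelete[#, PunctuationCharacter]]],
          Alternatives @@ stopwords],
         " "] & /@ paras]

For example, preprocess["Didn't they see it?\n\nThey did.", {"they", "it"}] gives {"didnt see", "did"}.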

Other possible optimisations to reduce future workload:

Ignore any paragraph of less than a certain length. Why? Because there’s a higher probability of natural duplication in shorter paragraphs (though arguably not in the same overall position).

Have an arbitrary cut-off length on the paragraphs. Why? Mostly because it reduces workload.

Finally:

For every word, turn it into a Metaphone. So instead of every paragraph being composed of normal words, it becomes a list of metaphones, which helps in comparing slightly modified words.

We end up with paragraphs that look like this (each of these lines is a separate paragraph):

WNT TR0 ABT E0L JRTN TTKTF INSPK WLMS E0L UTRL OBNKS JRL TM RL SRPRS LKT TRKTL KM N SX WRT LT ASK W0R RT WRKS T ST N WLTNT RT 0M AL 

But I admit, when it comes to the comparison I’m not sure how to approach it, beyond brute force: take the first encoded paragraph from B (B[0]) and check every paragraph in A, looking for a high match (maybe identical, maybe very similar). Perhaps we use Levenshtein distance to find a match percentage on the paragraphs.

If we find a match at A[n], then check B[1] against A[n+1], and maybe a couple further, A[n+2] and A[n+3], just in case something was inserted.

And proceed that way.
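
As a concrete starting point, here is a sketch of that loop in Mathematica, using the built-in EditDistance for the Levenshtein comparison. encA and encB are assumed to be the lists of encoded paragraphs, and the 0.8 threshold and lookahead of 3 are placeholder values to tune:

    (* similarity as 1 - normalised Levenshtein distance *)
    similarity[p_String, q_String] :=
     1. - EditDistance[p, q]/Max[StringLength[p], StringLength[q]]

    findMatches[encB_List, encA_List, threshold_ : 0.8, lookahead_ : 3] :=
     Module[{matches = {}, n = 0, range, cand},
      Do[
       (* with no anchor (or A exhausted), scan all of A by brute force;
          otherwise only look a few paragraphs past the previous match *)
       range = If[n == 0 || n >= Length[encA],
         Range[Length[encA]],
         Range[n + 1, Min[n + lookahead, Length[encA]]]];
       cand = First[MaximalBy[range, similarity[encB[[i]], encA[[#]]] &]];
       If[similarity[encB[[i]], encA[[cand]]] >= threshold,
        AppendTo[matches, i -> cand]; n = cand,
        n = 0],
       {i, Length[encB]}];
      matches]

The fraction Length[findMatches[encB, encA]]/Length[encB] then gives a rough "how much of B exists in A" score, which also covers the subset case.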

What should be detected:

  • Near-identical text
  • Global proper noun changes
  • B is a subset of A

Thanks.

What’s the difference between a creature that “can’t move” and a creature whose “speed becomes 0 and it can’t benefit from any bonus to its speed”?

While reading through the conditions described in Appendix A of the 5e PHB, I realised that some of them say a creature “can’t move” (e.g. Stunned, Unconscious), while others state that its “speed becomes 0” (e.g. Grappled, Restrained).

Can anyone explain to me the mechanical distinction between these two phrases?

EDIT: The answer provided by Zimul8r answered the literal question I asked: if a creature’s speed is reduced to 0, it could still benefit from effects that give it additional movement, while that’s not the case if it can’t move. Yet the conditions that state that a creature’s speed becomes 0 also state that it can’t get any bonus to its speed.

So there is still the question: why do some conditions state “can’t move”, while others say “speed becomes 0, and it can’t benefit from any bonus to its speed”? Is there a difference between these two statements?

Is it lazy or inconvenient not to distinguish between password reset use cases in the UI?

I was recently asked to reset a password because the security requirements for the website had been upgraded, and users whose passwords don’t meet the current standards have been asked to change them.

Although the user interface simply asked you to provide an email address (to verify that it is an active account) with a call-to-action to change the password, the link that arrived in my inbox led to a ‘Forgotten Email’ page with the same flow as clicking the ‘Forgotten Email?’ link commonly seen on the sign-in page.

Is it simply more convenient to use exactly the same process, or is it lazy design or development not to make this distinction, as it clearly has some effect on the user experience? Is this a common practice, and if so, why?

“How to read rules” — how did this change between editions?

It seems the role of rules¹ changed throughout the history of D&D.

For instance, as this answer says, there was an explicit distinction between “crunch” and “fluff” in fourth edition:

a distinction between “ignorable fluff” and the “real rules”

Such a separation is a prominent feature of 4e

Aside from this, were there any other changes in how the rules were supposed to be interpreted between editions? I’m primarily interested in the most popular “modern-ish” editions, so, narrowing the question: how did the role of the rules change in 5th edition in comparison to 3.5e?


1: By “the role of rules” I mean “how the rules are meant to be used in order to run a game”.

3-column layout for a song site: sidebar on left/right, and content between them

So I’m working on a website, an anime one to be precise, where people can read about anime songs and their singers, rate them, post comments on them, add them to their list, etc., and this is the current layout of a song’s page.

I think it’s decent, and I’m planning to keep it this way, but I’d like to know what others, maybe some experts, think about this layout.

So as you can see, in desktop view there is a sidebar menu on the left (which can’t be closed), and there is another one on the right, which contains some info about the song, such as the embedded YouTube video, some statistics, info about the singer (if one exists) and the anime, and the lyrics at the bottom. Both sidebars and the div between them are separately scrollable, but the scrollbars are hidden, because IMO they would ruin the design/layout.

And between them the content div is placed, which contains further information about the song (statistics, song recommendations, comment section, etc.).

Their widths go like this: 20-65-15 (in percent).

[screenshot: desktop view]

In mobile view, the sidebar on the right is pushed to the top of the content div, since it fits perfectly within the screen size of a phone, and of course the menu (but without the search section) can be opened with the usual hamburger menu icon, which, unlike on most sites, is placed at the bottom.

[screenshot: mobile view]

This layout is also convenient when the user wishes to read all the comments. On that page, the layout is still the same, so the sidebar on the right is still visible. But this time it’s only visible in desktop view, so this is one privilege of those who visit the site on a PC.

[screenshot: the page with all comments]

Do you think a layout like this would be convenient for the users? Or isn’t it too unusual to have sidebars on both sides?

How to find the distances between two adjacent maxima in a solution from NDSolve

I’m trying to find a general approach to plotting the time evolution of the horizontal distances from maximum to maximum in the solution of a PDE. The solution u[x,t] normally has multiple maxima and minima in space x, which move in space x and evolve in time t.

Here is a simple example, in which the maxima and minima are periodic. But in my real problem they are not periodic: the distances between different pairs of adjacent maxima differ at a given t, and the distance between two adjacent maxima can change with t.

    sol = NDSolve[{D[u[x, t], t] + u[x, t] D[u[x, t], x] + D[u[x, t], x, x] +
         0.4*D[u[x, t], {x, 3}] + D[u[x, t], {x, 4}] == 0,
       u[-4 \[Pi], t] == u[4 \[Pi], t], u[x, 0] == 0.1*Sin[x]},
      u, {t, 0, 20}, {x, -4 \[Pi], 4 \[Pi]}]

    Plot3D[Evaluate[u[x, t] /. First[sol]], {t, 0, 10}, {x, -4 Pi, 4 Pi},
     PlotRange -> All, PlotPoints -> 100]

[screenshot: Plot3D of the solution]

I have tried using Table[FindMaximum[Evaluate[u[x, t] /. First[sol]], {x, x0}][[2, 1, 2]], {t, 0, tend, 0.01}] with an initial position x0 to find a local maximum. But I don’t know how to find two adjacent maxima simultaneously in order to plot the time evolution of their distance.
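
One possible approach (a sketch rather than a general solution): at each time t0, sample the solution on a grid, seed FindMaximum from every grid point that beats both of its neighbours, and take Differences of the sorted maxima positions. The grid step of 0.05, the merge tolerance, and the assumption that at least two maxima exist at each sampled t0 are things to adjust for the real problem:

    uSol = u /. First[sol];

    maxPositions[t0_] := Module[{xs, vals, seeds, peaks},
      xs = Range[-4 Pi, 4 Pi, 0.05];
      vals = uSol[#, t0] & /@ xs;
      (* grid points higher than both neighbours serve as seeds *)
      seeds = Pick[xs[[2 ;; -2]],
        MapThread[#2 > #1 && #2 > #3 &,
         {vals[[;; -3]], vals[[2 ;; -2]], vals[[3 ;;]]}]];
      peaks = x /. Last[FindMaximum[uSol[x, t0], {x, #, -4 Pi, 4 Pi}]] & /@ seeds;
      (* merge seeds that converged to the same maximum *)
      DeleteDuplicates[Sort[peaks], Abs[#1 - #2] < 10^-3 &]]

    (* time evolution of the distance between the first pair of adjacent maxima *)
    ListLinePlot[Table[{t0, First[Differences[maxPositions[t0]]]}, {t0, 0.5, 10, 0.1}]]

Since Differences[maxPositions[t0]] returns every adjacent gap at time t0, each pair can be tracked separately in the non-periodic case, where the gaps differ from pair to pair.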

Minimum number of tree cuts so that each pair of trees alternates between strictly decreasing and strictly increasing

I want to find the minimum number of tree cuts so that each pair of adjacent trees in a sequence of trees alternates between strictly decreasing and strictly increasing. Example: in (2, 3, 5, 7), the minimum number of tree cuts is 2; a possible final solution is (2, 1, 5, 4).

My search model is a graph where each node is a possible configuration of all tree heights and each edge is a tree cut (= a decrease of the height of one tree). In this model, a possible path from the initial node to the goal node in the above example would be (2,3,5,7) – (2,1,5,7) – (2,1,5,4). I have used breadth-first search on it to find the goal node. Since BFS doesn’t revisit already-traversed nodes, the part of the graph that I traverse during the search is in fact a tree data structure.
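
For concreteness, here is a sketch of that search in Mathematica. It assumes, going by the example, that the sequence has to start with a strictly decreasing pair, that a single cut may lower a tree to any smaller height, and that heights stay at least 1; all three are assumptions to adjust:

    (* differences must alternate: negative, positive, negative, ... *)
    zigzagQ[s_List] :=
     And @@ MapIndexed[If[OddQ[First[#2]], #1 < 0, #1 > 0] &, Differences[s]]

    (* breadth-first search over height configurations; depth = number of cuts *)
    minCuts[initial_List] :=
     Module[{frontier = {initial}, visited = <|initial -> True|>, depth = 0, next, succ},
      While[frontier =!= {},
       If[AnyTrue[frontier, zigzagQ], Return[depth]];
       next = {};
       Do[
        succ = ReplacePart[s, i -> h];
        If[! KeyExistsQ[visited, succ],
         visited[succ] = True; AppendTo[next, succ]],
        {s, frontier}, {i, Length[s]}, {h, 1, s[[i]] - 1}];
       frontier = next; depth++];
      $Failed]

    minCuts[{2, 3, 5, 7}]  (* 2, matching the example *)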

The only improvement to this algorithm that I was able to think of was using a priority queue that orders the nodes to be explored, first by number of cuts (as traditional BFS already does) and second by the number of strictly increasing/decreasing triplets. This increases the probability that a goal node with the minimum number N of cuts will be among the first of all the nodes with N cuts to be evaluated, so the search can finish a little faster.

The time required to execute this algorithm grows exponentially with the number of trees and their heights. Is there any other algorithm/idea which could be used to speed it up?

Essential difference between assembly languages and all other programming languages

I understand that any assembly programming language has so little abstraction that anyone who programs in it (an OS creator, a hardware-driver creator, a “hacker”, and so on) has to know the relevant CPU’s architecture very well, unlike someone who programs in any “higher-level” programming language.

For me, this requirement to know the relevant CPU’s architecture very well is the essential difference between assembly programming languages and the rest of the programming languages, so we get:

  • non-assembly/high-level programming languages
  • assembly/low-level programming languages
  • machine-code languages, which usually won’t be used as programming languages but theoretically are such

Is this the only essential difference? If not, what else is there?