## 2D kernel density estimation (SmoothKernelDistribution) with bin width estimation: what are the bin values that Mathematica chooses?

Mathematica has built-in bandwidth estimation, including the rules "Scott", "SheatherJones", and "Silverman" (the default); they work both in 1D and in multiple dimensions. Most of the statistical documentation I could find for these bandwidth rules covers the 1D case; their implementation in 2D or higher dimensions is, as far as I can tell, not so well documented.

I could not find any Mathematica documentation on how exactly these rules are implemented in any dimension. For the Silverman case, there is a nice question that raises some very important subtleties: About Silverman's bandwidth selection in SmoothKernelDistribution.

For 2D data, my first guess was that Mathematica applies the same 1D algorithm along each axis, yielding a diagonal bandwidth matrix. I therefore extended the code from the linked question to 2D as follows:

```mathematica
Clear[data, silvermanBandwidth];
silvermanBandwidth[data_] := silvermanBandwidth[data] = Block[
   {m, n},
   m = MapThread[Min @ {#1, #2} &,
     {
       StandardDeviation @ data,
       InterquartileRange[data, {{0, 0}, {1, 0}}]/1.349
     }
   ];
   n = Length @ data;
   0.9 m/n^(1/5)
];
```

(The statistical literature uses slightly different conventions for the constants that appear in the code above, and I do not know precisely which version Mathematica picks; in any case, the discrepancy described below is larger than these small rounding differences.)

The approach above (and a few variations I tried) comes quite close to what Mathematica does in 2D, but it is not identical. Here is an example:

```mathematica
data = RandomReal[1, {100, 2}];
silvermanWMDist = SmoothKernelDistribution @ data;
silvermanMyDist = SmoothKernelDistribution[data, silvermanBandwidth @ data, "Gaussian"];
ContourPlot[PDF[silvermanWMDist, {x, y}],
  {x, -0.1, 1.1},
  {y, -0.1, 1.1}
]
ContourPlot[PDF[silvermanMyDist, {x, y}],
  {x, -0.1, 1.1},
  {y, -0.1, 1.1}
]
```

My questions are: how is Silverman's rule implemented in Mathematica for 2D data? And is there a way to print out the bandwidth matrix Mathematica derives, either for Silverman's rule or for any other rule?
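For readers outside Mathematica, the per-axis diagonal guess can be sketched in Python/NumPy. This uses SciPy's default quantile convention, which may differ slightly from `InterquartileRange[data, {{0, 0}, {1, 0}}]`, so it is an approximation of the guess above, not a reproduction of Mathematica's internals:

```python
import numpy as np
from scipy.stats import iqr

def silverman_bandwidth_2d(data):
    """Per-axis Silverman rule of thumb: h_j = 0.9 * min(sd_j, IQR_j/1.349) * n^(-1/5).

    NOTE: this mirrors the diagonal-bandwidth guess from the question; it is
    not guaranteed to match what SmoothKernelDistribution computes internally.
    """
    data = np.asarray(data, dtype=float)
    n = len(data)
    sd = data.std(axis=0, ddof=1)                    # sample standard deviation per axis
    spread = np.minimum(sd, iqr(data, axis=0) / 1.349)
    return 0.9 * spread * n ** (-1 / 5)

rng = np.random.default_rng(0)
print(silverman_bandwidth_2d(rng.uniform(size=(100, 2))))
```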

## Is Exit (no square brackets) equivalent to Quit[] for refreshing the Kernel from within an Evaluation Notebook?

I prefer to use Exit, as it conveniently requires fewer key presses than Quit[]. But before I use it regularly, I need to know whether there are any subtle differences between Quit[] and Exit. The Wolfram documentation pages for Quit and Exit appear very similar and even call these two functions synonymous, but I just need to be sure.

Thanks.


## What does a kernel of size n,n^2 ,… mean?

So according to Wikipedia,

In the notation of [Flum and Grohe (2006)], a *parameterized problem* consists of a decision problem $$L \subseteq \Sigma^*$$ and a function $$\kappa : \Sigma^* \to \mathbb{N}$$, the parameterization. The *parameter* of an instance $$x$$ is the number $$\kappa(x)$$. A *kernelization* for a parameterized problem $$L$$ is an algorithm that takes an instance $$x$$ with parameter $$k$$ and maps it in polynomial time to an instance $$y$$ such that

• $$x$$ is in $$L$$ if and only if $$y$$ is in $$L$$, and
• the size of $$y$$ is bounded by a computable function $$f$$ of $$k$$. Note that in this notation, the bound on the size of $$y$$ implies that the parameter of $$y$$ is also bounded by a function of $$k$$.

The function $$f$$ is often referred to as the size of the kernel. If $$f = k^{O(1)}$$, $$L$$ is said to admit a polynomial kernel; similarly, for $$f = O(k)$$, the problem admits a linear kernel.

Stupid question, but since the parameter can be anything, can't you just define the parameter to be really large, so that you always have a linear kernel?
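For concreteness, here is a rough Python sketch of a classic kernelization, Buss's rules for Vertex Cover, which yields a kernel with at most $$k^2$$ edges (the helper name `buss_kernel` is mine, not from the literature):

```python
def buss_kernel(edges, k):
    """Buss kernelization for Vertex Cover.

    Rules: a vertex of degree > k must be in any size-k cover (take it and
    decrement k); isolated vertices disappear automatically.  If the reduced
    graph still has more than k^2 edges, it is a NO-instance.  Otherwise the
    reduced instance has at most k^2 edges -- a kernel of quadratic size.
    Returns (reduced_edges, reduced_k), or None for a NO-instance.
    """
    edges = {frozenset(e) for e in edges}
    changed = True
    while changed and k >= 0:
        changed = False
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        for v, d in deg.items():
            if d > k:                        # v is forced into the cover
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
    if k < 0 or len(edges) > k * k:
        return None                          # no vertex cover of size k exists
    return edges, k

# A star with 5 leaves has a cover of size 1 (the center), which the
# degree rule forces immediately, leaving an empty kernel.
star = [(0, i) for i in range(1, 6)]
print(buss_kernel(star, 1))
```

The point of the definition is that the kernel size is bounded as a function of $$k$$ alone, independently of the instance size; the parameterization itself is fixed as part of the problem, not chosen per instance.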

## Is this Ubuntu kernel version vulnerable to dirty cow? [closed]

I am attempting to escalate privileges on a CTF Ubuntu box, but I am afraid to run Dirty COW due to a possible crash. Is this kernel version vulnerable to the exploit?

```
Linux ip-10.0.0.1 3.13.0-162-generic #212-Ubuntu SMP Mon Oct 29 12:08:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
```

The Ubuntu version is Ubuntu 14.04.

The Dirty COW documentation shows that Ubuntu 14.04 kernel versions < 3.13.0-100.147 are vulnerable, but I am confused as to whether this version is vulnerable, and I want to be reasonably sure before running the exploit on the CTF (Capture the Flag) machine.
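One way to sanity-check this is to compare the release string against the patched version numerically; a minimal Python sketch (note that distributions backport fixes, so a pure version-string comparison is only a heuristic, and the threshold 3.13.0-100.147 is taken from the question):

```python
def parse_kernel(release):
    """Split an Ubuntu kernel release like '3.13.0-162' into an integer tuple."""
    base, _, abi = release.partition("-")
    parts = [int(p) for p in base.split(".")]
    parts += [int(p) for p in abi.split(".") if p]
    return tuple(parts)

def older_than(release, patched):
    a, b = parse_kernel(release), parse_kernel(patched)
    # pad to equal length so (3,13,0,162) compares cleanly against (3,13,0,100,147)
    n = max(len(a), len(b))
    a += (0,) * (n - len(a))
    b += (0,) * (n - len(b))
    return a < b

# 3.13.0-162 is newer than the patched 3.13.0-100.147, so by the rule
# "versions < 3.13.0-100.147 are vulnerable" it would NOT be vulnerable.
print(older_than("3.13.0-162", "3.13.0-100.147"))  # -> False
```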


As far as I read in an OS textbook (Operating Systems by Silberschatz), kernel mode is for privileged tasks, so is it true to claim that "user-level threads can read/write kernel threads"?

Generally speaking, is there any kind of protection between user- and kernel-level threads?

## Security of NGFW OS and kernel

I know there are lots of different providers, but let us focus on the bigger ones and the ones running some kind of Linux. In the end they are all some kind of huge packet-parsing engine, and I guess many options will be enabled in the kernel. But I am not sure about that, nor can you find much information on how they do networking under the hood. Are they doing something specifically different from a normal Linux system in terms of kernel/program security and networking? Or are they more or less the average Linux router with iptables plus a nice GUI and analytics?

When I look through some patches/changelogs, I regularly see high-risk CVEs, so I am wondering whether they can actually make network security worse.

## How to understand mapping function of kernel?

For a kernel function, we have two conditions. One is that it should be symmetric, which is easy to understand intuitively because dot products are symmetric as well, and our kernel should follow this too. The other condition is given below:

There exists a map $$\varphi : \mathbb{R}^d \to H$$, called the kernel feature map, into some high-dimensional feature space $$H$$ such that $$\forall x, x'$$ in $$\mathbb{R}^d : k(x, x') = \langle \varphi(x), \varphi(x') \rangle$$.

I understand that this means there should exist a feature map that projects the data from a low dimension into some high dimension, and the kernel function then takes the dot product in that space.

For example, the squared Euclidean distance is given as

$$d(x,y) = \sum_i (x_i - y_i)^2 = \langle x, x \rangle + \langle y, y \rangle - 2\langle x, y \rangle$$

If I look at this in terms of the second condition, how do we know that no feature map exists for the Euclidean distance? What exactly are we looking for in feature maps, mathematically?
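The expansion above, and one standard reason the squared distance fails to be a kernel, can be checked numerically. A valid kernel must produce positive semidefinite Gram matrices (Mercer's condition); the squared-distance "Gram matrix" below has a negative eigenvalue, so no feature map can realize it as a dot product. A small NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=3)

# Squared Euclidean distance expanded into inner products:
# sum_i (x_i - y_i)^2 = <x,x> + <y,y> - 2<x,y>
lhs = np.sum((x - y) ** 2)
rhs = x @ x + y @ y - 2 * (x @ y)
print(np.isclose(lhs, rhs))  # -> True

# "Gram matrix" of d(u,v)^2 for the two 1D points u=0, v=1:
D = np.array([[0.0, 1.0],
              [1.0, 0.0]])
# A negative eigenvalue means D is not PSD, so d^2 cannot be
# written as <phi(u), phi(v)> for any feature map phi.
print(np.linalg.eigvalsh(D))  # -> [-1.  1.]
```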


## Any exploit details regarding CVE-2019-3846 : Linux Kernel ‘marvell/mwifiex/scan.c’ Heap Buffer Overflow Vulnerability

How can I get this exploit working? Is there any known method for it?

It appears that various Linux versions < 8 are vulnerable to this issue:

Linux Kernel 'marvell/mwifiex/scan.c' Heap Buffer Overflow Vulnerability

Issue description: a flaw that allowed an attacker to corrupt memory and possibly escalate privileges was found in the mwifiex kernel module while connecting to a malicious wireless network.

Can you share exploit details regarding this?

https://vulners.com/cve/CVE-2019-3846
https://www.securityfocus.com/bid/69867/exploit (no exploit there)

Any tips on how to exploit this?


## How to build Linux Volatility Profiles With the Compiled Kernel

I'm familiar with creating Linux memory profiles as described here. However, that assumes I have access to the live system, which oftentimes is not the case.

I heard there is a way to build the profile from the compiled Linux kernel, but I cannot find any documentation on how to do that through googling. Is anyone familiar with building Volatility profiles from a compiled kernel, and if so, would you be willing to provide instructions on how to do so?

Thanks!


## How can a classifier using the Laplacian kernel achieve no error on the input samples?

If we have a sample dataset $$S = \{(x_1, y_1), \ldots, (x_n, y_n)\}$$ where $$y_i \in \{0,1\}$$, how can we tune $$\sigma$$ such that a classifier using the Laplacian kernel makes no error on $$S$$?

Laplacian Kernel is

$$K(x,x') = \exp\left(-\frac{\|x - x'\|}{\sigma}\right)$$

If this is possible, does it mean that if we run hard-SVM with the Laplacian kernel and this $$\sigma$$ on $$S$$, we will also find a separating classifier with no error?
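As a numerical illustration of the underlying intuition (plain kernel interpolation with NumPy, not an actual hard-SVM): for distinct points the Laplacian Gram matrix is invertible, and as $$\sigma \to 0$$ it approaches the identity, so any labeling can be fit exactly:

```python
import numpy as np

def laplacian_gram(X, sigma):
    """Gram matrix K[i, j] = exp(-||x_i - x_j|| / sigma)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.exp(-d / sigma)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))           # 20 distinct points
y = rng.integers(0, 2, size=20)        # labels in {0, 1}

# With a small sigma the Gram matrix is nearly the identity, so the kernel
# interpolant f(x_j) = sum_i alpha_i K(x_i, x_j) can reproduce any labels.
sigma = 0.01
K = laplacian_gram(X, sigma)
alpha = np.linalg.solve(K, 2 * y - 1)  # fit targets in {-1, +1}
preds = (K @ alpha > 0).astype(int)
print(np.array_equal(preds, y))  # -> True
```

A hard-SVM with the same kernel and such a small $$\sigma$$ would likewise separate the training set, since the data become linearly separable in the induced feature space; this sketch only demonstrates the zero-training-error part, not the max-margin solution.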
