How to understand the mapping function of a kernel?

For a kernel function, we have two conditions. One is that it should be symmetric, which is easy to understand intuitively because dot products are symmetric and our kernel should behave the same way. The other condition is given below:

There exists a map $\varphi : \mathbb{R}^d \to \mathcal{H}$, called the kernel feature map, into some high-dimensional feature space $\mathcal{H}$ such that $\forall x, x' \in \mathbb{R}^d : k(x, x') = \langle \varphi(x), \varphi(x') \rangle$.

I understand this to mean that there should exist a feature map that projects the data from the low-dimensional space into some higher-dimensional space $\mathcal{H}$, and the kernel function computes the dot product in that space.

For example, the squared Euclidean distance is given as

$d(x, y) = \sum_i (x_i - y_i)^2 = \langle x, x \rangle + \langle y, y \rangle - 2\langle x, y \rangle$

If I look at this in terms of the second condition, how do we know that no feature map exists for the Euclidean distance? What exactly are we looking for in feature maps, mathematically?
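One way to see this concretely is via Mercer's condition: a symmetric function admits a feature map only if every Gram matrix it produces is positive semi-definite. The sketch below (assuming NumPy; the sample points are arbitrary) builds the Gram matrix of the squared-distance "kernel" and shows that it has negative eigenvalues, which rules out the existence of any feature map.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))  # five arbitrary points in R^3

# Gram matrix of the candidate "kernel" k(x, y) = ||x - y||^2
D = np.square(X[:, None, :] - X[None, :, :]).sum(axis=-1)

# A valid kernel must yield a positive semi-definite Gram matrix
# (all eigenvalues >= 0). The squared-distance matrix has zero diagonal,
# so its trace is zero and it must have a negative eigenvalue.
eigvals = np.linalg.eigvalsh(D)
print(eigvals)                   # at least one strictly negative value
print(np.all(eigvals >= -1e-9))  # False -> no feature map can exist
```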

Any exploit details regarding CVE-2019-3846 : Linux Kernel ‘marvell/mwifiex/scan.c’ Heap Buffer Overflow Vulnerability

How can I get this exploit working, or is there any other method for this?

I have seen and read a lot about this issue in various references.

It appears that various Linux versions < 8 are vulnerable to this issue.

Linux Kernel ‘marvell/mwifiex/scan.c’ Heap Buffer Overflow Vulnerability

Issue Description: A flaw that allowed an attacker to corrupt memory and possibly escalate privileges was found in the mwifiex kernel module while connecting to a malicious wireless network.

Can you share exploit details regarding this?

https://vulners.com/cve/CVE-2019-3846
https://www.securityfocus.com/bid/69867/exploit : NO exploit there

Any tips on how to exploit this?

How to build Linux Volatility Profiles With the Compiled Kernel

I’m familiar with creating Linux memory profiles as stated here. However, this assumes that I have access to the live system, which is often not the case.

I heard there is a way to build the profile from the compiled Linux kernel, but I cannot find any documentation on how to do that through searching. Is anyone familiar with building Volatility profiles from the compiled kernel, and if so, would you be willing to provide instructions on how to do so?

Thanks!
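Not an authoritative answer, but the general idea for Volatility 2 is that a Linux profile is just a zip of module.dwarf (built against the kernel's headers) and the matching System.map, so if you have the compiled kernel tree you can build against it instead of the running kernel. A rough Python sketch of that flow is below; the paths, the KDIR override, and the directory layout are assumptions you would need to adapt to your checkout.

```python
#!/usr/bin/env python3
"""Sketch: build a Volatility 2 Linux profile from a compiled kernel tree.

Assumptions (verify against your environment): Volatility 2 is checked out
at VOLATILITY_DIR, the compiled kernel build tree (with headers and
System.map) lives at KERNEL_BUILD, and the tools/linux Makefile can be
pointed at that tree via a KDIR override. All paths are placeholders.
"""
import subprocess
import zipfile
from pathlib import Path

VOLATILITY_DIR = Path("/opt/volatility")          # placeholder
KERNEL_BUILD = Path("/path/to/compiled/kernel")   # placeholder: kernel build dir
PROFILE_ZIP = Path("MyProfile.zip")

tools_linux = VOLATILITY_DIR / "tools" / "linux"

# 1. Build module.dwarf against the compiled kernel instead of the running one.
#    (If your copy of the Makefile has no KDIR variable, edit it to point at
#    the kernel build directory instead.)
subprocess.run(["make", f"KDIR={KERNEL_BUILD}"], cwd=tools_linux, check=True)

# 2. Zip module.dwarf together with the kernel's System.map to form the profile.
with zipfile.ZipFile(PROFILE_ZIP, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write(tools_linux / "module.dwarf", "module.dwarf")
    zf.write(KERNEL_BUILD / "System.map", "System.map")

# 3. Drop the zip into volatility/plugins/overlays/linux/ so it shows up
#    as a Linux profile under --info.
print(f"Wrote {PROFILE_ZIP}; copy it to volatility/plugins/overlays/linux/")
```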

How can a classifier using the Laplacian kernel achieve no error on the input samples?

If we have a sample dataset $S = \{(x_1, y_1), \ldots, (x_n, y_n)\}$ where $y_i \in \{0,1\}$, how can we tune $\sigma$ such that there is no error on $S$ from a classifier using the Laplacian kernel?

The Laplacian kernel is

$$ K(x, x') = \exp\left(-\frac{\|x - x'\|}{\sigma}\right) $$

If this is possible, does it mean that if we run hard-SVM on $S$ with the Laplacian kernel and the $\sigma$ from above, we also find a separating classifier with no error?
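Here is a small numerical illustration of the usual argument: if $\sigma$ is much smaller than the minimum pairwise distance between the $x_i$, the Gram matrix is close to the identity, the mapped points are nearly orthonormal, and any labelling of $S$ becomes linearly separable in feature space, so a hard-margin-style SVM fits the training set exactly. The sketch below assumes NumPy and scikit-learn, uses arbitrary data, and approximates hard-SVM with a very large $C$.

```python
import numpy as np
from sklearn.svm import SVC  # assumption: scikit-learn is available

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))      # arbitrary, distinct sample points
y = rng.integers(0, 2, size=40)   # arbitrary 0/1 labels

def laplacian_kernel(A, B, sigma):
    """K(x, x') = exp(-||x - x'|| / sigma), computed pairwise."""
    dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return np.exp(-dists / sigma)

# Choose sigma much smaller than the minimum pairwise distance: the Gram
# matrix then approaches the identity, so the training points become
# linearly separable in feature space regardless of how they are labelled.
pairwise = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
sigma = pairwise[pairwise > 0].min() / 10.0

K = laplacian_kernel(X, X, sigma)
clf = SVC(kernel="precomputed", C=1e6)  # large C approximates hard-margin SVM
clf.fit(K, y)

print("training accuracy:", clf.score(K, y))  # expected: 1.0
```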

Kernel ROP crashes the running OS

I was experimenting to see if I can build a ROP chain within the kernel. In kernel debugging mode, I can make the first jump to an arbitrary gadget address without any problem, but the problem occurs after that. If I resume the kernel by typing continue, the kernel freezes and the OS does not respond; I have to restart my VM to get back to a working state.

My understanding is that when I jump or return to an arbitrary address (gadget) in the kernel, the stack contents do not change the way they would for a normal function call. Therefore, when I execute the gadget instructions one by one, or continue running the kernel, the instructions may need values from the stack that are not there, and as a result the kernel crashes.

So my question is: how can I jump/return to multiple gadgets and, after running all the gadgets, continue running the kernel without crashing it? I guess there is no easy and straightforward answer, but if someone can give a basic idea of where to start, that would be very helpful. Thank you in advance.
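As a starting point, the usual structure is that every gadget ends in ret, so the chain is laid out as a fake stack: each gadget address is followed by exactly the values it pops, and the final slot returns to a safe continuation point so execution can resume instead of falling into garbage. The layout sketch below is purely illustrative; all addresses are hypothetical placeholders, not real kernel symbols.

```python
import struct

def q(value: int) -> bytes:
    """Pack a 64-bit little-endian stack slot."""
    return struct.pack("<Q", value)

# Hypothetical gadget and symbol addresses (placeholders, not real ones).
POP_RDI_RET = 0xffffffff81000111   # gadget: pop rdi ; ret
POP_RSI_RET = 0xffffffff81000222   # gadget: pop rsi ; ret
TARGET_FUNC = 0xffffffff81000333   # some kernel function to "call"
RESUME_ADDR = 0xffffffff81000444   # safe place to return to afterwards

# The chain is a fake stack: each gadget consumes exactly the slots that
# follow it, and the last slot hands control back to a legitimate code path
# so the kernel can keep running instead of executing from a corrupted stack.
chain  = q(POP_RDI_RET) + q(0x1)   # rdi = 1 (value popped by the gadget)
chain += q(POP_RSI_RET) + q(0x2)   # rsi = 2
chain += q(TARGET_FUNC)            # "call" the function via ret
chain += q(RESUME_ADDR)            # the function's ret lands somewhere safe

print(chain.hex())
```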

Algorithm for finding an irreducible kernel of a DAG in O(V*e) time, where e is the number of edges in the output

An irreducible kernel is the term used in the Handbook of Theoretical Computer Science (HTCS), Volume A, “Algorithms and Complexity”, in the chapter on graph algorithms. Given a directed graph G=(V,E), an irreducible kernel is a graph G’=(V,E’) where E’ is a subset of E, both G and G’ have the same reachability (i.e. their transitive closures are the same), and removing any edge from E’ would break this condition, i.e. E’ is minimal (although not necessarily of the minimum possible size).

A minimum equivalent graph is similar, except it also has the fewest number of edges among all such graphs. Both of these concepts are similar to a transitive reduction, but not the same because a transitive reduction is allowed to have edges that are not in E.
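For intuition (this is not the O(V*e) algorithm asked about), a straightforward way to obtain some irreducible kernel is to test each edge and discard it if its endpoints stay connected without it; whatever survives is minimal by construction. A rough sketch, assuming the DAG is given as an edge list:

```python
from collections import deque

def irreducible_kernel(vertices, edges):
    """Greedily drop edges of a DAG whose removal preserves reachability.

    Runs in roughly O(E * (V + E)) time: a naive baseline, not the
    O(V * e) algorithm from Noltemeier's paper.
    """
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)

    def reachable(src, dst):
        """BFS over the current (possibly reduced) edge set."""
        seen, queue = {src}, deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                return True
            for nxt in adj[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return False

    for u, v in list(edges):
        adj[u].discard(v)          # tentatively remove the edge
        if not reachable(u, v):    # reachability broke: the edge is needed
            adj[u].add(v)

    return [(u, v) for u in adj for v in adj[u]]

# Example: a total order on 4 vertices; the kernel is the path 1->2->3->4.
verts = [1, 2, 3, 4]
all_edges = [(i, j) for i in verts for j in verts if i < j]
print(sorted(irreducible_kernel(verts, all_edges)))  # [(1, 2), (2, 3), (3, 4)]
```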

HTCS says that there is an algorithm to calculate an irreducible kernel of a directed acyclic graph in O(V*e) time, where V is the number of vertices and e is the number of edges in the irreducible kernel, i.e. the output of the algorithm. The reference given for this is the following paper, which I have not been able to find an online source for yet (links or other sources welcome; I can ask at a research library soon if nothing turns up).

Noltemeier, H., “Reduction of directed graphs to irreducible kernels”, Discussion paper 7505, Lehrstuhl Mathematische Verfahrensforschung (Operations Research) und Datenverarbeitung, Univ. Göttingen, Göttingen, 1975.

Does anyone know what this algorithm is? It surprises me a little that it includes the number of edges in the output graph, since that would mean it should run in O(n^2) time given an input graph with O(n^2) edges that represents a total order, e.g. all nodes are assigned integers from 1 up to n, and there is an edge from node i to j if i < j. That doesn’t seem impossible, mind you, simply surprising.