How does the Microsoft Windows operating system handle process, file, and memory management? In other words, how are process, file, and memory management implemented in Windows?
How concerned should I be about this result from unhide?
unhide-linux scan starting at: 17:02:43, 2020-03-23
Used options: logtofile
[*]Searching for Hidden processes through /proc stat scanning
[*]Searching for Hidden processes through /proc chdir scanning
[*]Searching for Hidden processes through /proc opendir scanning
[*]Searching for Hidden thread through /proc/pid/task readdir scanning
[*]Starting scanning using brute force against PIDS with fork()
Found HIDDEN PID: 20540 Cmdline: "" Executable: "" "... maybe a transitory process"
[*]Starting scanning using brute force against PIDS with pthread functions
[*]Searching for Fake processes by verifying that all threads seen by ps are also seen by others
[*]Searching for Hidden processes through sysinfo() scanning
1 HIDDEN Processes Found
sysinfo.procs reports 565 processes and ps sees 566 processes
unhide-linux scan ending at: 17:04:57, 2020-03-23
To reduce disk space, I plan to use hard links instead of full copies. Is there any security issue when two different processes use different hard links to the same executable file as their base?
I'm trying to hack my own WiFi using aircrack-ng but have had no success. I cannot capture a successful handshake, as the deauth doesn't seem to have any effect on my targeted devices. This is what it outputs:
root@RPI02:~# aircrack-ng -w password.lst *.cap
Opening WIFI_APPLE.cap-01.cap..
Read 180751 packets.

   #  BSSID              ESSID       Encryption
   1  F1:2E:DG:F2:EE:0F  WIFI APPLE  WPA (0 handshake)

Choosing first network as target.
Opening WIFI_APPLE.cap-01.cap..
Read 180751 packets.
1 potential targets

**Packets contained no EAPOL data; unable to process this AP.**
What exactly does this line mean?
Packets contained no EAPOL data; unable to process this AP.
I observed a large amount of data being sent from one of our machines. After investigating through EDR, I found that the chrome.exe process is initiating this connection toward presence.api.drift.com; the total amount over 4 hours is 2.83 GB.
I'm trying to pinpoint why Chrome is doing this. I'm afraid it's data exfiltration; any suggestion would be helpful.
I am trying to implement a completely fair scheduler. I want to know how to calculate the initial virtual runtime of a process in order to insert the process into the red-black tree.
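For what it's worth, here is only an illustrative sketch, not real kernel code: in Linux's CFS, a newly forked task's vruntime is initialized roughly relative to the run queue's min_vruntime, so the new task neither jumps ahead of everyone nor starves behind long-running tasks. All class and method names below are made up for the example, and a PriorityQueue stands in for the kernel's red-black tree.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Hypothetical names throughout; this only illustrates the initialization rule.
class Task {
    final String name;
    long vruntime;
    Task(String name, long vruntime) { this.name = name; this.vruntime = vruntime; }
}

class FairQueue {
    // Tracks the smallest vruntime ever dequeued; never decreases.
    private long minVruntime = 0;

    // Stand-in for the red-black tree, ordered by vruntime (leftmost = next to run).
    private final PriorityQueue<Task> queue =
        new PriorityQueue<>(Comparator.comparingLong(t -> t.vruntime));

    // A brand-new task starts at minVruntime: starting it at 0 instead would
    // let it monopolize the CPU until it caught up with everyone else.
    void enqueueNew(String name) {
        queue.add(new Task(name, minVruntime));
    }

    // Re-insert a task after it has run and accumulated vruntime.
    void requeue(Task t) {
        queue.add(t);
    }

    Task pickNext() {
        Task t = queue.poll();
        if (t != null) minVruntime = Math.max(minVruntime, t.vruntime);
        return t;
    }
}
```

The real kernel additionally debits a new task by one scheduling slice (START_DEBIT) and handles weights, but the core answer to "what initial value?" is: anchor it to the queue's minimum vruntime, not to zero.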
So the question is: is it theoretically possible to feed a neural network some random values and expect an output, since randomness is, in most cases, a lack of knowledge?
For this question, I’ve got some examples.
First case, not a real problem?
We just throw a coin and get the result, and we do that a whole bunch of times. For each throw, we record the initial conditions (air pressure, force, etc.). Now we feed all this data into the neural network for it to process.
My guess: the result is not really random, since it depends only on the initial conditions, so it's possible and the neural network will do a great job. So I guess that example is not a real problem, since the "degree of randomness" is weak.
Second case, questioning
Now, we generate a random list of numbers and sentences that correspond to each other, so we have something like:
'zefvkbdl' -> 1613841.009
'nfeovhlzm' -> 963478.29
'jhgcjbklnsczl' -> 1.535953
'ergz' -> 9138630.26
etc ...
Everything was randomly generated (still, the sentences and numbers were not generated separately; each number was generated after a sentence and corresponds to that sentence). In that case, is it possible to give a neural network half of the list (the list can be arbitrarily long) and expect it to predict the other half with great precision?
My guess: it depends on the generation algorithm, but let's pretend that a letter is just a particular index in an array and that the index was randomly generated. Since, most of the time, numbers are randomly generated from the digits of the current time (the last decimals, which change extremely fast), I'm not sure about this, but I guess it might theoretically be possible for an extremely powerful neural network to do that job.
Let's now be even more theoretical and consider that it is possible to somehow store the global state of the universe at each moment in time. The only thing that is truly random, to my knowledge, is quantum mechanics, so let's try it out. At each point in time, we store the whole state of the universe and the outcome of measuring a quantum particle's state (like the spin of an electron). Is it possible, after training the biggest neural network, to "predict" the outcome of measuring a quantum particle's state knowing the state of the universe?
Since I'm just a curious student, I don't have a lot of knowledge of neural networks or quantum mechanics, so I probably said a lot of wrong things, and I'm sorry for that. Thank you for reading all of this; I hope someone can help me answer, or correct me.
Now, the real question I'm asking is: does randomness truly exist?
Packed an AutoHotkey script with ahk2exe using MPress, got 13 hits on the VirusTotal online scan for the zipped result. Packed the same script with the same ahk2exe without MPress and got just 6 hits on the zipped result. Zip performed with identical 7z defaults in both cases.
Some of the virus agents associated with red flags in the first scan now come up as “Unable to process file type” in the second report. Why is this?
I'm running Windows 10, and for a while now I've noticed an excessively high number of Rundll32 instances running in the background. It just seems weird to me that 55 of these instances are running at once. I've run a malware scan, but nothing comes up.
I am having a very hard time understanding tree based DP problems. I am fairly comfortable with array based DP problems but I cannot come up with the correct thought process for tree based problems and I was hoping somebody could please explain their thought process.
I will talk about my thought process behind array based problems and then explain my troubles with tree based DP problems.
My thought process for array problems
The way I think about DP in array based problems is as follows. Let us consider a problem like Minimum Path Sum. Here the objective is to get from the top-left to the bottom-right position in a matrix while minimizing the cost of the path. We can only move right and down.
The way I would approach problems like this is as follows:
- First I would construct a recurrence. In this case the recurrence is:
f(i, j) = a[i][j]                                       // if i == m and j == n
f(i, j) = a[i][j] + f(i, j+1)                           // if i == m
f(i, j) = a[i][j] + f(i+1, j)                           // if j == n
f(i, j) = a[i][j] + Math.min( f(i, j+1), f(i+1, j) )    // otherwise
Next I look at the last equation, f(i, j) = a[i][j] + Math.min( f(i, j+1), f(i+1, j) ), which tells me the problem can be solved using DP, as there are overlapping subproblems in f(i+1, j) and f(i, j+1). There is also an optimal substructure.
I can also tell the time/space complexity just by looking at the recurrence.
- Because we must compute all states which is all (i,j) pairs and because time per state is O(1) (adding a[i][j] to result) the time complexity is O(n^2).
- Looking at the recurrence, i depends only on i+1 and not on i+2, i+3 … similarly j depends only on j+1 and not on j+2, j+3… so we can get away with using only 1 extra row (either i+1 or j+1) instead of the entire matrix so space complexity is O(n).
Hence I would come up with an O(n^2)-time, O(n)-space solution. I can do this without any problems.
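The single-row version of that recurrence can be sketched as follows (assuming the usual formulation where moves are right and down; the class and method names are mine):

```java
// Bottom-up DP for Minimum Path Sum using one row of the matrix as the cache.
// Before row i is processed, dp[j] holds f(i+1, j); afterwards it holds f(i, j).
class MinPathSum {
    static int minPathSum(int[][] a) {
        int m = a.length, n = a[0].length;
        int[] dp = new int[n];
        for (int i = m - 1; i >= 0; i--) {
            for (int j = n - 1; j >= 0; j--) {
                if (i == m - 1 && j == n - 1) {
                    dp[j] = a[i][j];                            // base case: target cell
                } else if (i == m - 1) {
                    dp[j] = a[i][j] + dp[j + 1];                // last row: can only move right
                } else if (j == n - 1) {
                    dp[j] = a[i][j] + dp[j];                    // last column: dp[j] is still f(i+1, j)
                } else {
                    dp[j] = a[i][j] + Math.min(dp[j + 1], dp[j]);  // f(i, j+1) vs f(i+1, j)
                }
            }
        }
        return dp[0];  // f(0, 0)
    }
}
```

The only values the recurrence ever reads are f(i, j+1) (already written into dp this pass) and f(i+1, j) (left over from the previous pass), which is exactly why one row suffices.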
My thought process for tree problems
However I am having a hard time applying the same thought process to tree based DP problems. As an example let us consider the problem Diameter of Binary Tree where the objective is to find the longest path between any 2 nodes in the tree.
I can come up with a recurrence for this problem which is as follows:
f(n) = 0                                          // if n == null
f(n) = max( 1 + height(n.left) + height(n.right), // longest path passing through n
            f(n.left),                            // longest path in left subtree
            f(n.right) )                          // longest path in right subtree
f(n.left), for example, is computed by doing 1 + height(n.left.left) + height(n.left.right), so the same subtree heights get recomputed at every level, and I can tell that DP must be used.
So my approach would be to create a cache of size n that stores the heights of all the nodes, making the space complexity O(n).
However the optimal solution of this problem has a space complexity of O(1) and I am having a hard time figuring that out just by looking at the recurrence. How does the recurrence tell you that space complexity can be reduced and that O(1) space is enough and O(n) is not needed? How do you know what value(s) to store in this case? In array based problems I can get the answers to both these questions just by looking at the recurrence but for tree based dp it is not so obvious to me.
What can you tell about this problem just by looking at the recurrence for the tree problem? Putting aside my own thought process, if I gave you this recurrence and nothing else what conclusions would you reach and how would you write the program? I am curious about your thought process.
For array based problems I can tell just by looking at the recurrence both how much space I needed to solve the problem AND what exactly I needed to store (I need to store values of row i+1 in min path sum and nothing else). How can I do the same for the tree problem?
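For comparison, here is a sketch of the commonly cited answer, which the recurrence alone does not reveal: make the recursion return the height and fold the diameter into that same pass, so no per-node cache of heights is needed (only the recursion stack). The sketch below measures the diameter in edges, and all names are mine:

```java
// One-pass diameter: height() returns the height of each subtree (in edges)
// and, as a side effect, updates the best diameter seen so far. No cache of
// heights is kept, so the extra space is O(1) beyond the recursion stack.
class TreeDiameter {
    static class Node {
        Node left, right;
        Node(Node left, Node right) { this.left = left; this.right = right; }
    }

    private int best;  // longest path (in edges) found so far

    int diameter(Node root) {
        best = 0;
        height(root);
        return best;
    }

    // Height in edges; an empty tree has height -1 so that a leaf gets 0.
    private int height(Node n) {
        if (n == null) return -1;
        int l = height(n.left);
        int r = height(n.right);
        // The longest path through n uses both subtree heights plus the
        // two edges connecting n to its children.
        best = Math.max(best, l + r + 2);
        return 1 + Math.max(l, r);
    }
}
```

The insight is that f(n.left) and f(n.right) are never needed *after* f(n); each height is consumed exactly once, by the immediate parent, so the return value itself plays the role the cache played. That "consumed exactly once, immediately" property is the tree analogue of "row i only reads row i+1" in the array problem.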