## Monitoring Parallel Processes

I have a problem where my For loop will not get rid of its PrintTemporary monitor, which means I end up with loads and loads of progress bars.

```mathematica
(* set number of kernels to use *)
numkernel = 4;

(* how high do you want to count to? *)
iterations = 15;

(* for monitoring progress over multiple kernels *)
progress[current_, total_] :=
  Column[{StringRiffle[ToString /@ {current, total}, " of "],
    ProgressIndicator[current, {0, total}]}, Alignment -> Center]

(* counting very slowly *)
takeyourtime[howmany_, kernelnum_] := Module[{},
  For[i = 1, i <= howmany, i++,
   status[[Key[$KernelID]]] = <|
     "Kernel" -> "Kernel " <> ToString@kernelnum,
     "Monitor" -> progress[i, howmany]|>;
   Pause[1];
   ];
  ]

(* will count up to iterations, k times over numkernel kernels *)
For[k = 1, k <= 3, k++,
 Print[k];
 (* clean up *)
 Clear[parrallel];
 (* how to divide the counting *)
 division = Round[iterations/numkernel];
 (* remainder on the first kernel *)
 divisionmod = Round[iterations/numkernel] + Mod[iterations, numkernel];
 (* put the first evaluation in, with the remainder *)
 parrallel = {Hold[ParallelSubmit[{takeyourtime[divisionmod, 1]}]]};
 (* for each remaining kernel, set up the parallel submissions *)
 For[i = 2, i <= numkernel, i++,
  parrallel =
   Append[parrallel, Hold[ParallelSubmit[{takeyourtime[division, i]}]]];
  (* have to be careful when expressions are evaluated: substitute the
     current value of i as text, but keep the rest held for the parallel bit *)
  parrallel[[i]] = StringReplace[ToString[parrallel[[i]]],
    ", i]" -> ", " <> ToString[i] <> "]"];
  parrallel[[i]] = ToExpression[parrallel[[i]]];
  ];
 LaunchKernels[];
 (* the monitoring variable we will print *)
 status = Association @@ ParallelTable[
    $KernelID -> <|"Kernel" -> "", "Monitor" -> ""|>, {i, $KernelCount}];
 (* distribute the required definitions around the kernels *)
 DistributeDefinitions[takeyourtime, division, divisionmod];
 SetSharedVariable[status];
 (* this is the monitoring bit *)
 PrintTemporary[Dynamic[Row[
    Riffle[Column[#, Alignment -> Center] & /@
      Query[Values, Values]@Select[#"Monitor" =!= "" &]@status,
     Spacer[5]]]]];
 (* execute the parallel processes *)
 results = WaitAll[ReleaseHold[parrallel]];
 CloseKernels[];
 ]
```

## In Markov Decision Processes, why does R0 get skipped?

I’m in the process of learning about MDPs, and a pretty small thing is bugging me. Everywhere I look, I see the trajectory written in this order:

$$S_{0}, A_{0}, R_{1}, S_{1}, A_{1}, R_{2}, \ldots, S_{t}, A_{t}, R_{t+1}$$

My question is, why did $$R_{0}$$ get skipped?
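My current understanding of the indexing, as a tiny illustrative sketch (the `step` function is a made-up environment, not from any textbook; it only exists to expose which index each reward gets):

```python
# Purely illustrative: a hypothetical deterministic environment whose
# reward for acting in state s is s + 1, just to make the indexing visible.
def step(state, action):
    next_state = state + action
    reward = state + 1  # produced BY the transition, so it carries index t+1
    return next_state, reward

trajectory = []
s = 0
for t in range(3):
    a = 1                           # fixed action keeps the example deterministic
    trajectory.append(("S", t, s))
    trajectory.append(("A", t, a))
    s, r = step(s, a)
    trajectory.append(("R", t + 1, r))  # the first reward emitted is R_1, not R_0

labels = [(kind, idx) for kind, idx, _ in trajectory]
print(labels)  # starts with ('S', 0), ('A', 0), ('R', 1) — no R_0 anywhere
```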

## Countering a system killing UID/GID=0 processes in Android

Suppose that there were a security system in an Android kernel meant to prevent exploits that have arbitrary kernel memory read/write from getting root privileges. This system,

1. It kills a process using force_sig() with SIGKILL if the process’s UID or GID is 0 and the system decides it shouldn’t be.
2. It depends on kernel variables (its on/off status) that are read-only after init.

If we assume that the system decides with complete accuracy in [1] above, and KASLR is not present on the device, what can an exploit do to counter this system and get root IDs?

What I can think of:

1. Disabling SIGKILL temporarily:
If SIGKILL can be disabled temporarily (or even permanently until reboot) then the system is essentially useless, but I have yet to find a way to disable SIGKILL through kernel memory write.
2. Disabling the system by flipping the read-only bits somehow:
This is unlikely to be possible but included for the sake of completeness.
3. Editing the text sections of kernel memory to patch the functions:
Also unlikely to be possible because the text section is read-only.

## High Availability Boot processes and only using code-signing certificates

High Availability Boot (HAB) is a technique described in an NXP application note. It is best summarised as:

HAB authentication is based on public key cryptography using the RSA algorithm in which image data is signed offline using a series of private keys. The resulting signed image data is then verified on the i.MX processor using the corresponding public keys. This key structure is known as a PKI tree. Super Root Keys, or SRK, are components of the PKI tree. HAB relies on a table of the public SRKs to be hashed and placed in fuses on the target.

The procedure burns the Super Root Key (SRK) fuses using a software tool called srktool. In its proper use, I would use an SSL certificate with the extended key usage OID set for code signing; this OID is 1.3.6.1.5.5.7.3.3.

However, there doesn’t appear to be anything that stops me from using a certificate that is created for other purposes, e.g. for client authentication with the OID of 1.3.6.1.5.5.7.3.2.

The problem is that if I have two certificates from the same CA:

1. Code-signing certificate
2. Client certificate

I could sign the image with the code-signing certificate. If I could update the public key on the target device, then it would be possible to sign it with the client certificate and it would be accepted as valid.

The only option seems to be to use different CAs for code-signing and client certificates. I’m wondering: is there some way to check the OIDs instead?
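On the question of checking the OIDs: the extendedKeyUsage values are plain ASN.1 object identifiers, so a verifier could compare their DER encodings directly. A minimal, purely illustrative sketch of how the two OIDs above encode (this is generic ASN.1, not HAB- or srktool-specific code):

```python
def oid_to_der(oid: str) -> bytes:
    """DER-encode the content octets of a dotted-decimal OID (X.690 rules)."""
    arcs = [int(a) for a in oid.split(".")]
    body = bytearray([arcs[0] * 40 + arcs[1]])  # first two arcs share one byte
    for arc in arcs[2:]:
        chunk = [arc & 0x7F]                    # base-128, built little-endian
        arc >>= 7
        while arc:
            chunk.append((arc & 0x7F) | 0x80)   # high bit set on all but the last byte
            arc >>= 7
        body.extend(reversed(chunk))
    return bytes(body)

code_signing = oid_to_der("1.3.6.1.5.5.7.3.3")  # id-kp-codeSigning
client_auth  = oid_to_der("1.3.6.1.5.5.7.3.2")  # id-kp-clientAuth
print(code_signing.hex())  # 2b06010505070303
print(client_auth.hex())   # 2b06010505070302
```

The two encodings differ only in the last octet, so any check that hashes or compares the EKU extension bytes would distinguish the certificates.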

## Modeling a set of probabilistic concurrent processes

I’m looking into discrete-time Markov chains (DTMCs) for use in analyzing a probabilistic consensus protocol. One basic thing I haven’t been able to figure out is how to model a set of independent processes: consider $$N$$ processes. These processes will concurrently execute a series of identical instructions labeled $$0, 1, 2, 3,$$ etc. and all are starting in instruction $$0$$. When probability is not involved, modeling this is simple: it’s a state machine which branches nondeterministically off the start state to $$N$$ different states, where in each of those $$N$$ states a different process was the first to execute instruction $$0$$. What do we do when probability is involved? Do we do the same thing with $$N$$ states branching from the start state, where the probability of transitioning to each state is $$\frac{1}{N}$$? As in, it’s uniformly random which process was the first one to execute instruction $$0$$?
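To make the uniform-scheduler reading concrete, here is a small illustrative sketch (my own assumption: each process’s local state is just an integer program counter, and the global state is the tuple of all counters):

```python
from fractions import Fraction

def successors(state):
    """From a global state (one program counter per process), each of the
    N processes is equally likely to be the next to execute, so each
    single-step successor gets probability 1/N."""
    n = len(state)
    dist = {}
    for i in range(n):
        nxt = list(state)
        nxt[i] += 1  # process i executes its next instruction
        dist[tuple(nxt)] = dist.get(tuple(nxt), Fraction(0)) + Fraction(1, n)
    return dist

start = (0, 0, 0)          # three processes, all at instruction 0
print(successors(start))   # three successor states, each with probability 1/3
```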

Is this like taking the product of the state machines of each process?

I’m using a DTMC here; would I gain anything by moving to a CTMC if I don’t care about anything beyond the global order of execution?

Bonus question: assigning probabilities to whichever action (process executing an instruction) is taken first seems like a generalization of the non-probabilistic notion of fairness; if it is, what is the formal definition of this generalized notion of probabilistic fairness?

## Two processes doing extensive calculations – I want one to get ~100% of processor time – how?

I am running a basic Ubuntu server with two processes:

1. Process 1 performs calculations 100% of the uptime; I use it to share computing power with the community (it runs at priority 19).
2. Process 2 performs calculations for 5–10 minutes from time to time; I use it to compute for myself (it runs at priority -19).

I want process 2 to be given 100% of the computing power (process 1 should get close to 0% of the CPU at that moment). But the best I get is 50% of the CPU for process 1 and 50% for process 2 (checked with htop).

I don’t want to manually stop/start any process when I need computing power (both processes must be running all the time); 100% of CPU for process 2 must be given automatically.

What should I do to achieve my goal? Thanks.
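One direction that may be relevant (a sketch, not a tested solution): Linux’s SCHED_IDLE scheduling class runs a task only when no normal (SCHED_OTHER) task wants the CPU, which is closer to “process 2 gets everything” than nice values alone. A minimal Linux-only sketch using the standard library’s os module, setting the policy for the calling process itself (in the setup above, process 1 would do this at startup):

```python
import os

# Linux-only: move the calling process (pid 0 = self) into the SCHED_IDLE
# class, so it is scheduled only when no normal-priority task is runnable.
os.sched_setscheduler(0, os.SCHED_IDLE, os.sched_param(0))

policy = os.sched_getscheduler(0)
print(policy == os.SCHED_IDLE)  # True
```

Dropping to SCHED_IDLE does not require root; the same effect is available from the shell via `chrt --idle 0 <command>`.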

## External drive causes all processes talking to it to freeze. Has the drive failed?

My external drive cannot be accessed. It seems that the auto-mount hangs. When I click “Mount”, I get “unable to access volume – an operation is already pending.” When I try to remove the drive, I get: “Unable to Stop WCD – Error opening /dev/sdb for fsync: Device or resource busy.”

`fdisk -l /dev/sdb` hangs.

dmesg contains the following recent warnings:

```
task scsi_eh_6:5019 blocked for more than 120 seconds
...
task fdisk blocked for more than 120 seconds
...
task mount:5301 blocked for more than 120 seconds
...
task pool-udisksd:5059 blocked for more than 120 seconds
```

Etc. Every time I try to access the drive, the process trying to access it hangs. I tried to run badblocks, and even it froze after twenty minutes (badblocks blocked for more than 120 seconds, etc.).

I am not sure what kind of hardware problem causes every process trying to talk to the disk (including all diagnostic tools) to freeze. None of them can be terminated by Ctrl-C, I have to exit the terminal. What should I do?

Update:

```
=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Blue
Device Model:     WDC WD10EZEX-08WN4A0
Serial Number:    WD-WCC6Y0KC7LX4
LU WWN Device Id: 5 0014ee 20e20948d
Firmware Version: 01.01A01
User Capacity:    1,000,204,886,016 bytes [1.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-3 T13/2161-D revision 3b
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Thu Sep 26 17:46:58 2019 CDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART Status command failed: Connection timed out
SMART overall-health self-assessment test result: PASSED
Warning: This result is based on an Attribute check.

General SMART Values:
Offline data collection status:  (0x82) Offline data collection activity
                                        was completed without error.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                (12000) seconds.
Offline data collection
capabilities:                    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        ( 124) minutes.
Conveyance self-test routine
recommended polling time:        (   5) minutes.
SCT capabilities:              (0x3035) SCT Status supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   199   198   051    Pre-fail  Always       -       67
  3 Spin_Up_Time            0x0027   174   173   021    Pre-fail  Always       -       2283
  4 Start_Stop_Count        0x0032   092   092   000    Old_age   Always       -       8792
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   078   078   000    Old_age   Always       -       16789
 10 Spin_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       21
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       9
193 Load_Cycle_Count        0x0032   197   197   000    Old_age   Always       -       9697
194 Temperature_Celsius     0x0022   115   102   000    Old_age   Always       -       28
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   199   199   000    Old_age   Always       -       166
198 Offline_Uncorrectable   0x0030   200   199   000    Old_age   Offline      -       76
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline      -       83

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

Command "Execute SMART Short self-test routine immediately in off-line mode" failed: Connection timed out
```

The gnome-disks utility says that the Ext4 filesystem on the disk is undamaged, and the smartctl tests just return “connection timed out.” So what’s up?

## “Module nvidia is in use” but there are no processes running on the GPU

I am trying to configure VirtualGL, and the configuration gives the following message:

```
IMPORTANT NOTE: Your system uses modprobe.d to set device permissions. You
must execute rmmod nvidia with the display manager stopped in order for the
new device permission settings to become effective.
```

When I try running rmmod nvidia (or with sudo), it says that module nvidia is in use:

```
rmmod: ERROR: Module nvidia is in use by: nvidia_uvm nvidia_modeset
```

I have already stopped my window manager by running sudo systemctl stop sddm.service, so when I check nvidia-smi it says that there are no processes running on the GPU.

Most of the threads I found on this issue are related to bumblebee, but I don’t even have it installed.

Output of nvidia-smi:

```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.40       Driver Version: 430.40       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1080    Off  | 00000000:01:00.0 Off |                  N/A |
| 33%   39C    P8    12W / 200W |      9MiB /  8119MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
```

Ubuntu 18.04

## How to get PIDs of processes with most network usage in descending order

I’m using my phone’s hotspot for internet on my laptop, which runs Ubuntu 18.04. But even when I’m not doing anything, the laptop still uses data, and it consumed my whole data pack within 20 minutes.

This has been happening for the last 3 days, and I’m looking for a solution. What exactly is using this much data?

On Windows, network usage is directly visible in Task Manager. So, I guess there is an equivalent way to do the same on Linux.

I tried using the ps command, but I don’t think it gives details about network usage (correct me if I’m wrong).

Also, I tried searching on Stack Overflow and came across tools like iftop and many others, but I’m not able to find any details about the cause of the issue from any of them.

I’m not even able to install the tools mentioned in the articles I found on the web and Stack Overflow.

So, I want to know if there is any command that sorts processes by network usage, without using tools that need to be installed.

Is there any way to do this?
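Partial progress, for what it’s worth: /proc doesn’t expose per-process byte counters, but the socket inodes listed in /proc/net/* can be mapped back to PIDs through each process’s /proc/&lt;pid&gt;/fd symlinks. The sketch below only counts open sockets per PID (a rough proxy for “who is talking to the network”, not actual traffic), using nothing but the standard library:

```python
import os

def pids_by_socket_count():
    """Map PIDs to their number of open network sockets by matching socket
    inodes from /proc/net/* against /proc/<pid>/fd symlinks. Linux-only;
    other users' processes are visible only when run as root."""
    inodes = set()
    for proto in ("tcp", "tcp6", "udp", "udp6"):
        try:
            with open(f"/proc/net/{proto}") as f:
                for line in f.readlines()[1:]:      # skip the header row
                    fields = line.split()
                    if len(fields) > 9:
                        inodes.add(fields[9])       # column 10 is the inode
        except FileNotFoundError:
            pass
    counts = {}
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            fds = os.listdir(f"/proc/{pid}/fd")
        except (FileNotFoundError, PermissionError):
            continue                                # process exited, or not ours
        for fd in fds:
            try:
                target = os.readlink(f"/proc/{pid}/fd/{fd}")
            except OSError:
                continue
            if target.startswith("socket:[") and target[8:-1] in inodes:
                counts[pid] = counts.get(pid, 0) + 1
    # sort descending by socket count
    return dict(sorted(counts.items(), key=lambda kv: -kv[1]))

print(pids_by_socket_count())
```

This won’t show bytes transferred; for that, per-process accounting genuinely needs an installed tool (nethogs and the like) or kernel-side instrumentation.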

## Deleted snapd startup processes, is there a way to revert it?

My system was taking a long time to boot, so I removed three snapd startup processes. Now snap applications are not working, and the snap process does not start either. What can I do to restore the startup processes?