## Weird processes hogging lots of resources on my MacBook

This is an old screenshot, but basically these two processes (eriocaulaceae and stahlhelmer) just popped up out of nowhere, and I couldn’t find any information about them online. Just now they were hogging 99% of the CPU, but that stopped when I opened a Stack Exchange tab to post this question.

Any help identifying these two processes would be greatly appreciated.

I have a mid-2012 MacBook Pro running macOS Mojave 10.14.5.

## Training Documentation / Company Processes [on hold]

I work at a SaaS company with a huge system and set of processes. We have a wiki, lots of resources on Google Drive, and information / documentation stored in a large number of random places (including some individuals’ own minds).

I am trying to develop a one-stop-shop knowledge base for our engineers… Initially the idea was a resource that could be used for training new hires, but as we started to discuss it, we realised how badly we need it to be the reference point for all existing employees. I work on a specific engineering team, so I will create it for our team initially and build out from there.

This is going to be a HUGE task, as there is so much to gather together, but I am looking for some recommendations to get started. Specifically, does anybody have good ideas / resources for how we could create this one-stop shop and:

• Include drill-down links to more detail and eventually down to code
• List / link to tools available to engineers
• A live area where individuals can add ‘troubleshooting tips’ (for issues they solved in our live environment)
• Information on how our section of the system integrates with other teams

Or just ideas / recommendations on how these things are managed in companies you’ve worked for.

## Ergodicity criterion for multivariate Gaussian processes

Let $$[0, +\infty) \ni t \mapsto X(t) \in \mathbb{R}^d$$ be a stochastic process over a probability space $$(\Omega, \mathcal{F}, \mathbb{P})$$.

Suppose that $$X$$ is strictly stationary and Gaussian.

Is there a sufficient condition that guarantees that $$X$$ is ergodic?

(Ergodicity in the sense that: $$\frac{1}{T}\int_0^T \varrho(X(t)) dt \to \mathbb{E}[\varrho(X(0))]$$ for all bounded measurable $$\varrho$$)

If $$d=1$$ there is a famous criterion which considers the decay of the two-point correlation function. Is it clear how to extend this to higher dimensions?
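For readers unfamiliar with it, the scalar criterion referred to above can be sketched as follows (stated from memory, usually attributed to Maruyama / Grenander; worth checking against a standard reference before relying on it):

```latex
% d = 1: a strictly stationary Gaussian process is ergodic iff its spectral
% measure has no atoms. A convenient sufficient condition is decay of the
% two-point correlation function:
r(t) \;:=\; \operatorname{Cov}\!\big(X(0),\, X(t)\big)
\;\xrightarrow{\;t \to \infty\;}\; 0
\quad\Longrightarrow\quad X \ \text{is ergodic.}
```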

My first idea was to prove that any one-dimensional projection is ergodic (by applying the criterion above). The problem is then that I do not know how to lift this to the entire process, as there are elementary examples where joint ergodicity fails even for independent, individually ergodic one-dimensional components. It might be that Gaussianity helps in a way I do not see.

## Apache prefork module. Processes not being forked under heavy load

I have an Apache HTTP server using the prefork MPM running on a Linux machine. The machine has 8 GB of RAM. I have the following in my /etc/httpd/conf/httpd.conf:

<IfModule prefork.c>
    StartServers           8
    MinSpareServers        5
    MaxSpareServers       20
    ServerLimit          512
    MaxClients           512
    MaxRequestsPerChild 4000
</IfModule>

The problem is that no more child processes get forked after 256, and requests are being queued. Under heavy load I can see the number of child processes stuck at 256.

The average memory footprint of an httpd process is about 3.69 MB.
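For what it’s worth, memory does not look like the limiting factor here. A back-of-the-envelope check (my own arithmetic, assuming roughly 1 GB reserved for the OS and other services) suggests the box could hold far more than 256 children. One classic gotcha worth ruling out first: ServerLimit is read only at startup, so raising it requires a full stop/start of httpd, not a graceful restart.

```shell
# Rough capacity check: how many ~3.69 MB httpd children fit in 8 GB of RAM,
# assuming ~1 GB is reserved for the OS and everything else? (assumption)
avail_kb=$(( (8 - 1) * 1024 * 1024 ))   # 7 GB expressed in KB
child_kb=3779                           # ~3.69 MB per child, from the question
echo $(( avail_kb / child_kb ))         # prints 1942: far above the 256 ceiling
```

If the limit still sticks at 256 after a full stop/start, `httpd -V` shows the compiled-in defaults and which MPM the binary is actually using.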

## Software build processes – Dependencies without admin privileges

We have the following problem, and I would like to hear your opinions on this case:

Currently several users are working with domain accounts that are in the local admin group (yes, shame…).

This is needed for the build processes, because you have to work with certificates in the user store (with private keys, not exportable) as well as with the Windows Credential Store. Additionally, there is a need to move certificates.

Now the environment is to be made more secure, and the domain accounts should be removed from the admin group. Instead, the users should get a separate local admin account to start applications in an admin context.

But then the users have no way to access the user store (certificates and Credential Store), because the application runs in a different user context.

According to the people involved, adapting the build processes would mean considerable additional effort. Furthermore, storing the certificates in the machine store is not considered safe.

What is your opinion? Can this be solved in a smart way?

## Allow access to /proc/self for processes that get started from PHP

To run an application that allows signing PDFs online, I need to allow access to “/proc/self” for processes that get started from PHP.

How do I do that with Plesk + Ubuntu? Is it a safe option for the VPS?

I think that the command should be

mount --bind /proc /var/www/vhosts/…/websitepath

But I don’t really know if it’s a good move or not. :/ Any help?
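A sketch of the bind-mount approach (the vhost path here is a made-up placeholder, since the real path is elided above). Also worth checking first: if PHP is being blocked by an open_basedir restriction, adding /proc/self/ to the domain’s open_basedir in Plesk’s PHP settings may avoid the mount entirely.

```shell
# Hypothetical vhost path: substitute your real path.
target=/var/www/vhosts/example.com/proc
mkdir -p "$target"
mount --bind /proc "$target"   # requires root
# To survive reboots, a matching fstab line would look like (hypothetical):
#   /proc  /var/www/vhosts/example.com/proc  none  bind  0  0
```

Security-wise, bind-mounting all of /proc into a web-reachable tree exposes every process’s details to PHP code, which widens the attack surface; scoping access via open_basedir is the narrower option.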


## Counting number of non-sleeping processes of a given user

I would like to count the number of non-sleeping processes started by a given user. I know that the sleeping processes are the ones with an “S” or “D” in their ps state field. I also know I can count a user’s processes from a listing of the user and state fields with:

ps -e -o user,state | grep -c 'username'

Similarly, I know I could count the sleeping processes by (note that plain grep treats | as a literal character, so a bracket expression or grep -E is needed):

ps -e -o user,state | grep -c '[SD]'

However, I can’t figure out how to combine the two pieces of information to count the non-sleeping processes started by the user username.
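One way to combine them is to do both tests in a single awk filter instead of chaining greps (a sketch; `username` is a placeholder for the actual user):

```shell
# Count processes owned by "username" whose state does NOT start with S or D.
# The `=` after each column name suppresses ps's header line, so the header
# is never miscounted.
ps -e -o user=,state= | awk -v u="username" '$1 == u && $2 !~ /^[SD]/ { n++ } END { print n+0 }'
```

Matching only the first character of the state field (`^[SD]`) keeps this robust on systems where ps appends extra state flags (e.g. “Ss” or “R+”).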

## Bound and approximation of function of stochastic processes

I have two issues that seem very easy at first sight, but I am having trouble proving them. I have two pairs of stochastic processes $$\{X_{n,j}(t_j) : t_j \geq 0 \}$$ and $$\{Y_{n,j}(t_j): t_j \geq 0\}$$ for $$j=1,2,$$ and can suppose that for both $$j$$ they satisfy

$$\vert X_{n,j}(t_j) - Y_{n,j}(t_j) \vert \leq C_j t_j^{1/2 - \beta_j}$$ for some $$\beta_j > 0$$

and (under some more regularity conditions)

$$\sup \limits_{t_j \in [0,1]} \vert X_{n,j}(t_j) - Y_{n,j}(t_j) \vert = o(1)$$ as $$n \to \infty$$.

Now I want to verify whether also $$\vert \sum_{j=1}^2 X_{n,j}^2(t_j) - \sum_{j=1}^2 Y_{n,j}^2(t_j) \vert \leq \sum_{j=1}^2 C_j t_j^{1/2 - \beta_j}$$

and

$$\sup \limits_{t_1,t_2 \in [0,1]} \vert \sum_{j=1}^2 X_{n,j}^2(t_j) - \sum_{j=1}^2 Y_{n,j}^2(t_j) \vert = o(1)$$ as $$n \to \infty$$

holds. This seems very simple at first, since only the continuous function $$(x,y) \mapsto x^2+y^2$$ is involved, but the continuous mapping theorem doesn’t seem to be the correct way to prove this. Can anyone point me in the right direction?
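One elementary identity that may pinpoint the difficulty (an editorial remark, not part of the original post): the difference of squares factors, so the assumed bounds transfer only if the sums $$X_{n,j}+Y_{n,j}$$ are themselves controlled.

```latex
% Triangle inequality plus a^2 - b^2 = (a - b)(a + b):
\Big\vert \sum_{j=1}^{2} X_{n,j}^{2}(t_j) - \sum_{j=1}^{2} Y_{n,j}^{2}(t_j) \Big\vert
\;\le\; \sum_{j=1}^{2} \big\vert X_{n,j}(t_j) - Y_{n,j}(t_j) \big\vert
        \,\big\vert X_{n,j}(t_j) + Y_{n,j}(t_j) \big\vert .
% An extra assumption controlling |X_{n,j} + Y_{n,j}| (e.g. uniform boundedness
% on [0,1]) would then let both claimed bounds carry over; squaring alone does
% not preserve them, which is why the continuous mapping theorem does not
% apply directly.
```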

Thank you!

## mds processes are constantly crashing

Mojave 10.14.4 here: mdworker processes are constantly crashing and re-spawning, per the logs. The Spotlight ‘indexing’ progress bar stops at various points, always getting stuck, though apparently in different places each time. I’ve rebuilt the index many times (using mdutil -i on/off, mdutil -a -E, etc., as well as the GUI method). The index is clearly incomplete: searching for filenames that I can literally see turns up nothing in some cases and finds them in others. Mail isn’t returning good search results either.

In-place OS reinstall has been done; didn’t clear up the issue.

Apple is now recommending I install Mojave on a separate partition, boot into it and see if it has the same issue – don’t see how that’s going to help (yet), but I have to do it in order to move the process forward. I’ll update with any findings once I’ve done that.

It seems like it’s choking on some piece of data I have, and I need to figure out what that is so I can remove it.

It seems like many mdworker processes get spawned from a parent, and the one that crashes does so so quickly that I don’t have time to run lsof -p on it.

Is there another way to do this, or to get more information from the logs about the file(s) it’s choking on?
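One possibility, instead of racing the crash with lsof, is to follow the unified log live (a sketch; the exact worker process name varies by macOS version, so treat "mdworker" here as an assumption to verify in Activity Monitor):

```shell
# Stream unified-log messages from any process whose name starts with "mdworker"
# (e.g. mdworker, mdworker_shared on recent macOS versions).
log stream --info --predicate 'process BEGINSWITH "mdworker"'
```

Running `sudo fs_usage -w mdworker` in a second terminal can additionally show which file paths the workers touch right before a crash, which may reveal the item the indexer is choking on.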