## Why does ParallelTable warn where Table runs quietly?

I have some code involving `ParametricNDSolveValue` and `FindRoot` that finds a trajectory [specifically, the trajectory of a line that is everywhere normal to the magnetic field] passing through a given point {rn, zn} in the rz plane. First I call `ParametricNDSolveValue` to find the trajectory zs[z0] that starts from the point {0, z0} on the z axis:

```mathematica
dzdr$[r_, z_] /; r < 1/10^5 =
  (r (-9 Sin[2 z] + 4 Sin[4 z]))/(-9 + 9 Cos[2 z] - 2 Cos[4 z]);

dzdr$[r_, z_] /; r >= 1/10^5 =
  (2 r (9 BesselI[1, 2 r] Sin[2 z] - 2 BesselI[1, 4 r] Sin[4 z]))/
   (18 r + 2 r^3 - 9 r BesselI[0, 2 r] Cos[2 z] -
    9 BesselI[1, 2 r] Cos[2 z] - 9 r BesselI[2, 2 r] Cos[2 z] +
    2 r BesselI[0, 4 r] Cos[4 z] + BesselI[1, 4 r] Cos[4 z] +
    2 r BesselI[2, 4 r] Cos[4 z]);

pfun = ParametricNDSolveValue[
  {D[z[r], r] == dzdr$[r, z[r]], z[0] == z0},
  z, {r, 0, 1.5}, {{z0, -(\[Pi]/2), \[Pi]/2}}]

zs[{r_?NumericQ, z0_?NumericQ}] := pfun[z0][r]
```

In the second step I define a function getZ0[{rn, zn}] that calls `FindRoot` to find the starting point z0 of the trajectory that passes through a given point {rn, zn}:

```mathematica
getZ0[{rn_?NumericQ, zn_?NumericQ}, z0Start_?NumericQ] :=
 Module[{sol},
  sol = FindRoot[zs[{rn, z0}] - zn, {z0, z0Start, -(\[Pi]/2), \[Pi]/2}];
  sol[[1, 2]]]

getZ0[{rn_?NumericQ, zn_?NumericQ}] := getZ0[{rn, zn}, zn]
```

Finally, I want to evaluate getZ0 on a rectangular grid using `Table`:

```mathematica
Table[{{rn, zn}, getZ0[{rn, zn}]},
 {zn, -(\[Pi]/2), \[Pi]/2, 1/2 \[Pi]/2},
 {rn, 0, 1, 0.5}]
```

This works fine. However, substituting `ParallelTable` for `Table` produces a sequence of warnings of the types `FindRoot::lstol`, `ParametricNDSolveValue::ndsz`, and `InterpolatingFunction::dmval`. Nevertheless, both routines seem to give the same results.

To tell the truth, the differential equation solved by `ParametricNDSolveValue` is singular at two points (where the magnetic field is zero). But I wonder why there are no warnings when I use `Table` rather than `ParallelTable`?
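For reference, here is a possible way to check numerically that the two routines really agree (a sketch; the variable names `serial` and `parallel` are mine, and the definitions are distributed to the subkernels first):

```mathematica
(* Run the same grid serially and in parallel, then compare. *)
serial = Table[{{rn, zn}, getZ0[{rn, zn}]},
  {zn, -(\[Pi]/2), \[Pi]/2, 1/2 \[Pi]/2}, {rn, 0, 1, 0.5}];

(* Subkernels need the definitions before they can evaluate getZ0: *)
DistributeDefinitions[dzdr$, pfun, zs, getZ0];
parallel = ParallelTable[{{rn, zn}, getZ0[{rn, zn}]},
  {zn, -(\[Pi]/2), \[Pi]/2, 1/2 \[Pi]/2}, {rn, 0, 1, 0.5}];

(* Largest elementwise discrepancy between the two result grids: *)
Max[Abs[Flatten[serial] - Flatten[parallel]]]
```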

## Training with GPU and tuning parameters with ParallelTable

We intend to train a neural network that has hyperparameters, such as the learning rate, and we want to compare the training results for different hyperparameter values. Our computer has two multi-core CPUs and an NVIDIA GPU. We want to train on the GPU and sweep the parameters with `ParallelTable`. The results show that GPU and CPU utilization is very low, no different from using `Table`. Is it possible to improve GPU utilization and speed up the parameter sweep? The code is illustrated as follows:

```mathematica
net = NetChain[{LinearLayer[], LogisticSigmoid}];
data = {1 -> False, 2 -> False, 3 -> True, 4 -> True};

AbsoluteTiming[
 ParallelTable[
  NetTrain[net, data, All, MaxTrainingRounds -> 1000,
   TrainingProgressReporting -> None, TargetDevice -> "GPU",
   LearningRate -> c], {c, 0.1, 0.2, 0.1/200}];]
```
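One comparison worth timing (a sketch, assuming a single GPU that all subkernels would have to share) is a serial GPU sweep versus a parallel CPU sweep, since trainings sent to one device from several subkernels queue on that device anyway:

```mathematica
(* Serial sweep on the GPU: no subkernel contention for the device. *)
AbsoluteTiming[
 Table[NetTrain[net, data, All, MaxTrainingRounds -> 1000,
   TrainingProgressReporting -> None, TargetDevice -> "GPU",
   LearningRate -> c], {c, 0.1, 0.2, 0.1/200}];]

(* Parallel sweep on the CPUs: each subkernel trains independently. *)
AbsoluteTiming[
 ParallelTable[NetTrain[net, data, All, MaxTrainingRounds -> 1000,
   TrainingProgressReporting -> None, TargetDevice -> "CPU",
   LearningRate -> c], {c, 0.1, 0.2, 0.1/200}];]
```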

## Using more than 6 kernels in ParallelTable function

I have the following code:

```mathematica
ParallelTable[
 If[IntegerQ@Sqrt[80892036 + 17994 x (1 + x) (-5995 + 5998 x)], x, Nothing],
 {x, 6694300, 31072325}]
```

It uses 6 kernels to compute the result. Is it possible to speed it up further, for example by using more kernels?
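For concreteness, this is the kind of variation I have in mind (a sketch; the kernel count 12 is an arbitrary example and assumes the license permits that many subkernels):

```mathematica
(* Restart the parallel subsystem with an explicit kernel count. *)
CloseKernels[];
LaunchKernels[12];

(* Coarse-grained scheduling reduces communication overhead when
   each iteration is cheap, as it is here. *)
ParallelTable[
 If[IntegerQ@Sqrt[80892036 + 17994 x (1 + x) (-5995 + 5998 x)], x, Nothing],
 {x, 6694300, 31072325},
 Method -> "CoarsestGrained"]
```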