Is there a performance loss from out-of-sequence inserted rows? (MySQL InnoDB)

I am trying to migrate from a larger MySQL AWS RDS instance to a smaller one, and data migration is the only option. There are four tables in the 330 GB–450 GB range, and running mysqldump in a single thread, piped directly into the target RDS instance, is estimated by pv to take about 24 hours (copying at roughly 5 MB/s).

I wrote a bash script that launches several mysqldump processes in the background (ending each command with &), each with a calculated --where parameter, to simulate multithreading. This works, and it currently takes under an hour with 28 threads.
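In case a concrete sketch helps, this is roughly the idea, written in Python rather than bash (the hostnames, database, table, and max id are placeholders, it assumes an integer auto_increment primary key named id, and credentials and tuning flags are omitted):

import subprocess

SRC = "source-host.rds.amazonaws.com"     # placeholder source RDS endpoint
DST = "target-host.rds.amazonaws.com"     # placeholder target RDS endpoint
DB, TABLE = "mydb", "big_table"           # placeholder schema/table
MAX_ID = 280_000_000                      # e.g. SELECT MAX(id) FROM big_table
THREADS = 28

chunk = MAX_ID // THREADS + 1
procs = []
for t in range(THREADS):
    lo, hi = t * chunk, (t + 1) * chunk
    where = f"id >= {lo} AND id < {hi}"
    cmd = (f'mysqldump -h {SRC} --single-transaction --no-create-info '
           f'--where="{where}" {DB} {TABLE} | mysql -h {DST} {DB}')
    # each chunk runs as its own mysqldump | mysql pipeline in the background
    procs.append(subprocess.Popen(cmd, shell=True))

for p in procs:
    p.wait()    # wait for every chunk to finish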

However, I am concerned about a potential loss of query performance in the future, since I will not be inserting rows in the order of the auto_increment id columns.

Can someone confirm whether this would be the case, or whether I am being paranoid for no reason?

What solution did you use for a single table that is in the hundreds of GBs? For a particular reason I want to avoid using AWS DMS, and I definitely don’t want to use tools that haven’t been maintained in a while.

$f(x) = \sqrt{x^{2}+1}-1$ (Loss of Significance)

Let us say that I want to compute $ f(x) = \sqrt{x^{2}+1}-1$ for small values of $ x$ in a Marc-32 architecture. I can avoid loss of significance by rewriting the function

$$ f(x)=\left(\sqrt{x^{2}+1}-1\right)\left(\frac{\sqrt{x^{2}+1}+1}{\sqrt{x^{2}+1}+1}\right)=\frac{x^{2}}{\sqrt{x^{2}+1}+1} $$

Even though I can solve the problem, I do not understand why this rewriting avoids the loss of significance.
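To see the effect numerically, here is a small illustration I ran using NumPy’s float32 as a stand-in for single precision on the Marc-32 (x = 1e-4 is just an example value; the naive form subtracts two numbers that are both almost exactly 1, while the rewritten form never does):

import numpy as np

x = np.float32(1e-4)
one = np.float32(1.0)

naive = np.sqrt(x * x + one) - one               # 1 + 1e-8 rounds to 1.0 in float32
stable = (x * x) / (np.sqrt(x * x + one) + one)  # no subtraction of nearly equal numbers

print(naive)   # 0.0     -- every significant digit has cancelled
print(stable)  # ~5e-09  -- close to the true value, which is about x^2/2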

Variant of ridge regression loss function

We can create variants of a loss function, especially of ridge regression, by adding more regularizer terms. One of the variants I saw in a book is given below:

$ \min_{w \in \mathbf{R}^d} \ \ \alpha\,\|w\|^2 + (1-\alpha)\,\|w\|^4 + C\,\|y - X^T w\|^2$

where $ y \in \mathbf{R}^n$, $ w \in \mathbf{R}^d$, $ X \in \mathbf{R}^{d \times n}$, $ C \in \mathbf{R}$ is a regularization parameter, and $ \alpha \in [0,1]$.

My question is: how does a change in $ \alpha$ affect our optimization problem? And how does adding more regularizers help in general? Why isn’t one regularizer enough?
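To experiment with this myself, I put together a rough NumPy sketch of the objective and its gradient (same shapes as above, with X of size d × n; the toy loop just shows how sweeping alpha changes the solution and is not from the book, nor meant as a serious solver):

import numpy as np

def objective(w, X, y, alpha, C):
    # alpha*||w||^2 + (1-alpha)*||w||^4 + C*||y - X^T w||^2
    r = y - X.T @ w
    return alpha * (w @ w) + (1 - alpha) * (w @ w) ** 2 + C * (r @ r)

def gradient(w, X, y, alpha, C):
    r = y - X.T @ w
    return 2 * alpha * w + 4 * (1 - alpha) * (w @ w) * w - 2 * C * (X @ r)

rng = np.random.default_rng(0)
d, n = 5, 50
X = rng.normal(size=(d, n))
y = rng.normal(size=n)

for alpha in (0.0, 0.5, 1.0):
    w = np.zeros(d)
    for _ in range(5000):                  # plain gradient descent with a small step
        w -= 1e-3 * gradient(w, X, y, alpha, C=1.0)
    # watch how the norm of the minimiser and the loss respond to alpha
    print(alpha, np.linalg.norm(w), objective(w, X, y, alpha, 1.0))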

Does regret change when the loss function is dependent on the previous predictions?

The loss function of each expert in the expert advice problem (or any online learning problem) depends on the time $ t$ and on that expert’s advice at that time, $ f_{t}(i)$. Suppose that in this problem the loss function also depends on the algorithm’s previous predictions: $$ l_{t}(i) = p_{1} p_{2} \cdots p_{t-1} f_{t}(i)$$ where $ p_{s}$ denotes the algorithm’s prediction at round $ s$.
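For reference, by regret I mean the usual external regret with this modified loss, writing $ \hat{l}_{t}$ for the algorithm’s own loss in round $ t$ and $ N$ for the number of experts: $$ R_T = \sum_{t=1}^{T} \hat{l}_{t} - \min_{1 \le i \le N} \sum_{t=1}^{T} l_{t}(i)$$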

Does the upper bound of regret change?

Does RLNC (Random Linear Network Coding) still need interaction from the other side to overcome packet loss reliably?

I’m looking into implementing RLNC as a project. I understand the basic idea: encode the original data with random linear coefficients to produce a number of coded packets, send those packets, and the receiver will be able to reconstruct the original data from some subset of the received packets, even if a few are lost, provided enough encoded packets arrive to form a solvable system of equations.

However, I have not seen it mentioned that with a non-zero packet loss rate there is a possibility that the receiver will not get enough packets to reconstruct the original data. The only solution I can see is to add some kind of sequencing so the receiver can verify it has not missed packets it would need for reconstruction, in other words interaction. Am I missing some part of the algorithm? If someone has already solved this problem, can you please point me to where it has been solved so I can read about it?
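To make the question concrete, here is a toy sketch of what I mean, with coefficients over GF(2) so that mixing is just XOR (real implementations typically work over GF(2^8), but the structure is the same). It shows that any k linearly independent coded packets are enough to decode, but nothing in the scheme itself tells the sender whether the receiver ever collected that many, which is exactly the feedback question:

import random

def encode(source_packets):
    # one coded packet = XOR of a random subset of the k source packets,
    # i.e. a random GF(2) linear combination, sent along with its coefficients
    k = len(source_packets)
    coeffs = [random.randint(0, 1) for _ in range(k)]
    if not any(coeffs):
        coeffs[random.randrange(k)] = 1          # avoid the useless all-zero combination
    payload = bytes(len(source_packets[0]))
    for c, pkt in zip(coeffs, source_packets):
        if c:
            payload = bytes(a ^ b for a, b in zip(payload, pkt))
    return coeffs, payload

def decode(received, k):
    # Gaussian elimination over GF(2); succeeds iff the received packets have rank k
    rows = [(list(c), bytes(p)) for c, p in received]
    for col in range(k):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            return None                          # not enough independent packets (yet)
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           bytes(x ^ y for x, y in zip(rows[r][1], rows[col][1])))
    return [rows[i][1] for i in range(k)]

k = 4
src = [bytes([i]) * 8 for i in range(k)]         # four 8-byte source packets
sent = [encode(src) for _ in range(10)]          # over-send: 10 coded packets
received = random.sample(sent, 6)                # the network drops 4 of them
print(decode(received, k) == src)                # usually True; None if rank < 4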

Creating Loss Ports For Multiple Output Neural Net

I am making a multi-classification neural net for a set of data. I have created the net, but I think I need to specify a loss port for each classification.

Here are the labels for the classification and the encoder & decoders.

labels = {"Dark Colour", "Light Colour", "Mixture"} sublabels = {"Blue", "Yellow", "Mauve"} labeldec = NetDecoder[{"Class", labels}]; sublabdec = NetDecoder[{"Class", sublabels}]; bothdec = NetDecoder[{"Class", Flatten@{labels, sublabels}}]  enc = NetEncoder[{"Class", {"Dark Colour", "Light Colour", "Mixture",      "Blue", "Yellow", "Mauve"}}] 

Here is the Net

SNNnet[inputno_, outputno_, dropoutrate_, nlayers_, class_: True] :=
  Module[{nhidden, linin, linout, bias},
   nhidden = Flatten[{Table[{(nlayers*100) - i}, {i, 0, (nlayers*100), 100}]}];
   linin = Flatten[{inputno, nhidden[[;; -2]]}];
   linout = Flatten[{nhidden[[1 ;; -2]], outputno}];
   NetChain[
    Join[
     Table[
      NetChain[
       {BatchNormalizationLayer[],
        LinearLayer[linout[[i]], "Input" -> linin[[i]]],
        ElementwiseLayer["SELU"],
        DropoutLayer[dropoutrate]}],
      {i, Length[nhidden] - 1}],
     {LinearLayer[outputno],
      If[class, SoftmaxLayer[], Nothing]}]]]

net = NetInitialize@SNNnet[4, 6, 0.01, 8, True];

Here are the nodes that are used for the NetGraph function:

nodes = Association["net" -> net, "l1" -> LinearLayer[3],     "sm1" -> SoftmaxLayer[], "l2" -> LinearLayer[3],     "sm2" -> SoftmaxLayer[],    "myloss1" -> CrossEntropyLossLayer["Index", "Target" -> enc],    "myloss2" -> CrossEntropyLossLayer["Index", "Target" -> enc]]; 

Here is what I want the NetGraph to do:

connectivity = {NetPort["Data"] ->      "net" -> "l1" -> "sm1" -> NetPort["Label"],    "sm1" -> NetPort["myloss1", "Input"],    NetPort[sublabels] -> NetPort["myloss1", "Target"],     "myloss1" -> NetPort["Loss1"],    "net" -> "l2" -> "sm2" -> NetPort["Sublabel"],    "myloss2" -> NetPort["Loss2"],    "sm2" -> NetPort["myloss2", "Input"],    NetPort[labels] -> NetPort["myloss2", "Target"]}; 

The data will diverge at “net” for each classification, pass through the subsequent linear and softmax layers, and reach the relevant NetPort. The problem I’m having is with the loss ports, which branch off at each softmax layer.

When I run this code:

NetGraph[nodes, connectivity, "Label" -> labeldec,   "Sublabel" -> sublabdec] 

I receive the error message: NetGraph::invedgesrc: NetPort[{Blue,Yellow,Mauve}] is not a valid source for NetPort[{myloss1,Target}].

Could anyone tell me why this is occurring?

Thanks for reading.

Detecting conservation, loss, or gain in a crafting game with items and recipes

Suppose we’re designing a game like Minecraft where we have lots of items $ i_1,i_2,\dots,i_n\in I$ and a bunch of recipes $ r_1,r_2,\dots,r_m\in R$. Recipes are functions $ r:(I\times\mathbb{N})^n\rightarrow I\times\mathbb{N}$; that is, they take some items with non-negative integer weights and produce an integer quantity of another item.

For example, the recipe for cake in Minecraft is:

3 milk + 3 wheat + 2 sugar + 1 egg $ \rightarrow$ 1 cake

… and the recipe for torches is:

1 stick + 1 coal $ \rightarrow$ 4 torches

Some recipes could even be reversible, for example: 9 diamonds $ \leftrightarrow$ 1 diamond block

If there’s some combination of recipes we can repeatedly apply to get more of the items than we started with, then the game is poorly balanced and this can be exploited by players. It’s more desirable that we design the game with recipes that conserve items or possibly lose some items (thermodynamic entropy in the real world – you can’t easily un-burn the toast).

Is there an efficient algorithm that can decide if a set of recipes will:

  • conserve items?
  • lose items to inefficiency?
  • gain items?

Is there an efficient algorithm that can find the problematic recipes if a game is imbalanced?

My first thoughts are that there is a graph structure / maximum-flow problem here, but it’s very complex, and that it also resembles a knapsack problem. Or maybe it could be formulated as a SAT problem – that’s how I’m considering coding it at the moment, but something more efficient might exist.

We could encode the recipes in a matrix $ \mathbf{R}^{m \times n}$ where rows correspond to recipes and columns correspond to items. Column entries are negative if an item is consumed by a recipe, positive if it’s produced by the recipe, and zero if it’s unused. Similar to a well-known matrix method for graph cycle detection, we could raise $ \mathbf{R}$ to some high power and take the sums of each row to see if item totals keep going up, stay balanced, or go negative. However, I’m not confident this always works.
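To push the matrix idea a little further, one relaxation that is cheap to test is a linear program over fractional recipe applications: with $ \mathbf{R}$ as above, ask whether some nonnegative mix $ x$ of recipes has a net effect $ \mathbf{R}^T x$ that is nonnegative for every item and strictly positive for at least one, since any such mix can be repeated to create items from nothing. This is only a sketch (it ignores integrality and the need to actually possess the inputs first, so it is a necessary check rather than a full answer), using SciPy and a deliberately broken reverse recipe so the result comes out positive:

import numpy as np
from scipy.optimize import linprog

def find_gain(R):
    # R is (m recipes) x (n items): R[r, i] = net change in item i when recipe r runs once
    # (negative = consumed, positive = produced).  Look for x >= 0 with sum(x) = 1 such
    # that R^T x >= 0 for every item, maximising the total net number of items created.
    m, n = R.shape
    c = -R.sum(axis=1)                            # maximise sum_i (R^T x)_i = (row sums) . x
    res = linprog(c,
                  A_ub=-R.T, b_ub=np.zeros(n),    # R^T x >= 0 componentwise
                  A_eq=np.ones((1, m)), b_eq=[1.0],   # normalise so the LP is bounded
                  bounds=[(0, None)] * m, method="highs")
    return -res.fun, res.x

# items: milk, wheat, sugar, egg, cake, stick, coal, torch, diamond, diamond_block
R = np.array([
    [-3, -3, -2, -1,  1,  0,  0,  0,  0,  0],    # cake
    [ 0,  0,  0,  0,  0, -1, -1,  4,  0,  0],    # torches
    [ 0,  0,  0,  0,  0,  0,  0,  0, -9,  1],    # 9 diamonds -> 1 block
    [ 0,  0,  0,  0,  0,  0,  0,  0, 10, -1],    # buggy reverse: 1 block -> 10 diamonds
])

gain, x = find_gain(R)
print(gain, x)   # gain > 0 and x concentrates on the diamond round trip, flagging the exploit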

Any discussion, code, or recommended reading is very appreciated.

Converting an .rtf file to a .nb file without data loss

A notebook of mine unfortunately ended up with a syntax error, and I would have lost weeks of work. To recover, I was able to save my old notebook as an .rtf file without any excessive data loss. When I convert this .rtf to .nb by saving the file as .nb, Mathematica is able to open the notebook almost as I left it. The problem is that this loses all of the special characters, of which I have many. Is there any way to keep the special characters while creating the new .nb?

What happens when Aid ends on a target who has suffered maximum hp loss?

The spell aid temporarily raises a target’s hit points and maximum hit points by 5:

Each target’s hit point maximum and current hit points increase by 5 for the duration.

A vampire’s bite, for instance, causes necrotic damage with a rider that reduces the victim’s maximum hit points:

The target’s hit point maximum is reduced by an amount equal to the necrotic damage taken

If a character under the effect of aid is bitten by a vampire (or subject to any other effect that reduces their maximum hit points), what happens to their maximum hit point total when aid subsequently expires or is dispelled? I can see two interpretations:

  1. The character’s maximum hit points drop by 5 again, so ultimately their new maximum hit point total is whatever it would normally be less the maximum hit point reduction they suffered

  2. The maximum hit point loss suffered reduces the increase granted by aid first, and maximum hit points already lost this way aren’t lost again when aid ends, so the character is up to 5 points better off than they would have been without aid

Essentially, aid might or might not act as a buffer against maximum HP loss after it expires, but I’m not sure which interpretation is best supported by the rules.