Should Reduce give all cases when $\sqrt{x y} = \sqrt x \sqrt y$?

My understanding is that Reduce returns all conditions (combined with Or) under which the input is true.

Now, $\sqrt{xy} = \sqrt{x}\,\sqrt{y}$, where $x, y$ are real, holds under the following three conditions/cases

$$ \begin{align*} x \geq 0,\; y \geq 0 \\ x \geq 0,\; y \leq 0 \\ x \leq 0,\; y \geq 0 \end{align*} $$

but not when $ x<0,y<0$
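(For example, with the principal square root and $x = y = -1$: $\sqrt{(-1)(-1)} = \sqrt{1} = 1$, while $\sqrt{-1}\,\sqrt{-1} = i \cdot i = -1$, so the identity fails when both arguments are negative.)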

This is verified by doing

    ClearAll[x, y]
    Assuming[Element[{x, y}, Reals] && x >= 0 && y >= 0, Simplify[Sqrt[x*y] - Sqrt[x]*Sqrt[y]]]
    Assuming[Element[{x, y}, Reals] && x >= 0 && y <= 0, Simplify[Sqrt[x*y] - Sqrt[x]*Sqrt[y]]]
    Assuming[Element[{x, y}, Reals] && x <= 0 && y >= 0, Simplify[Sqrt[x*y] - Sqrt[x]*Sqrt[y]]]
    Assuming[Element[{x, y}, Reals] && x <= 0 && y <= 0, Simplify[Sqrt[x*y] - Sqrt[x]*Sqrt[y]]]

[output: 0 for the first three cases, a nonzero expression for the last]

Then why does

    Reduce[Sqrt[x*y] - Sqrt[x]*Sqrt[y] == 0, {x, y}, Reals]

give only one of the three cases above?

[Reduce output showing only one of the cases]

Is my understanding of Reduce wrong or should Reduce have given the other two cases?

Version 12 on Windows.

How to reduce calculation time using ParallelDo instead of a Do loop?

I am trying to use the ParallelDo function together with ParametricNDSolveValue and NIntegrate.

The question is basically: how do I apply ParallelDo to my code?

First I used a Do loop, and it works fine, but it takes a long time. To reduce the computational time I want to use ParallelDo instead, but I get an error message and I don't understand where it comes from.

Moreover, I intend to write the output data to a file using

    file = OpenWrite["file1.dat", FormatType -> TableForm];

But I get the following error: OutputStream[file1.dat, 3] is not open.

Below is my minimal working code and some comments:

    l1 = 0.81;
    Z = 1500;
    x0 = 10;
    v0 = 0.02;
    \[Epsilon] = $MachineEpsilon;
    l0 = 0.0714`20.;

    ps = ParametricNDSolveValue[{y''[r] + 2 y'[r]/r == -4 \[Pi] l k Exp[-y[r]],
        y[\[Epsilon]] == y0, y'[\[Epsilon]] == 0,
        WhenEvent[r == 1, y'[r] -> y'[r] + Z l]}, {y, y'},
       {r, \[Epsilon], R}, {k, l}, Method -> {"StiffnessSwitching"},
       AccuracyGoal -> 5, PrecisionGoal -> 4, WorkingPrecision -> 15];

    file = OpenWrite["file1.dat", FormatType -> TableForm];

    ParallelDo[
      x = i x0;
      v = i^3 v0;
      R = Rationalize[v^(-1/3), 0];
      l = Rationalize[l1/(i x0), 0];
      nn = FindRoot[Last[ps[y0, l]][R], {y0, -1}, Evaluated -> False][[1, 2]];
      Tot = 4 \[Pi] nn NIntegrate[r^2 Exp[-First[ps[nn, l]][r]],
         {r, \[Epsilon], R}, PrecisionGoal -> 4];
      Print[NumberForm[i*1., 5], "  ", NumberForm[Tot, 5]],
      {i, 292/100, 31/10, 1/100}] // Quiet // AbsoluteTiming

    Close[file];

What can I do to prevent or reduce message loss in a microservices system?

Quite often I have methods that do the following:

  1. Process some data
  2. (frequent, but optional) Save some state to database
  3. Publish a message to a queue / topic

What options do I have to protect myself against transient errors (but not only transient) with #3? Implementing a retry / repeat mechanism is one approach, but it probably won’t work if the issue that prevents the message from being sent lasts longer than a few seconds or a few minutes.
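For reference, here is a minimal sketch (not from the original question) of the kind of retry / repeat mechanism meant above; the publish callable and the backoff parameters are placeholders:

    import time

    def publish_with_retry(publish, message, attempts=5, base_delay=0.5):
        """Attempt to publish a message, retrying with exponential backoff.

        `publish` stands in for whatever broker client call sends the message.
        If every attempt fails, the last exception is re-raised so the caller
        can fall back to something more durable (e.g. the state saved in step 2).
        """
        for attempt in range(attempts):
            try:
                publish(message)
                return
            except Exception:
                if attempt == attempts - 1:
                    raise  # outage outlasted the retries; let the caller decide
                time.sleep(base_delay * 2 ** attempt)  # exponential backoff

As the question notes, this only covers outages of a few seconds or minutes; anything longer needs the message persisted somewhere so it can be re-sent later.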

Are AWS security groups enough to segment network and reduce PCI scope?

I was reading this paper

https://d1.awsstatic.com/whitepapers/pci-dss-scoping-on-aws.pdf

It shows this image

[diagram from the whitepaper]

Am I correct in saying that, as long as instances have proper security groups that restrict connectivity, those security groups remove them from PCI scope?

On an additional note: is it just me, or is it awfully difficult to find best practices for PCI in cloud environments? The guidance seems a bit all over the place.

Reduce BIG O of code [on hold]

I have some code here and it runs correctly, but it is slow and it times out on Codewars challenges. Is there a way to make it faster, or is my approach wrong?

    function idBestUsers() {
      // frequency: purchase counts per user; saleList: grouped result to return
      let frequency = {}
      let saleList = []
      let temp = []
      let purchase = 0
      let data = Array.from(arguments)
      // users appearing in every argument list (intersection of all lists)
      let users = new Set(data.reduce((a, b) => a.filter(c => b.includes(c))))
      if (users.size === 0) return []
      // count how many times each common user appears across all lists
      for (let user of users) {
        for (let item of arguments) {
          frequency[user] = item.filter(a => a === user).length + purchase
          purchase = frequency[user]
        }
        purchase = 0
      }
      // sort users by descending purchase count, then group ties together
      let sortFrequency = Object.keys(frequency).sort((a, b) => frequency[b] - frequency[a])
      let i = 0
      let j = 1
      while (i < sortFrequency.length) {
        if (temp.length === 0) temp = [frequency[sortFrequency[i]], [sortFrequency[i]]]
        if (frequency[sortFrequency[i]] != frequency[sortFrequency[j]]) {
          saleList.push(temp)
          temp = []
        } else {
          temp[1].push(sortFrequency[j])
          temp[1].sort()
        }
        i++
        j++
      }
      return saleList;
    }

Mathematical Techniques to Reduce the Width of a Gaussian Peak

In instrumental chemical analysis, the signals of several molecules often overlap, which makes it difficult to determine the true area of each peak, such as those shown in red in the figure below. I simulated this as a sum of six Gaussians (with some tailing).

  1. One of the simplest techniques is to raise the discrete signal values to a power $n > 1$ (the larger $n$, the narrower the peaks). The standard deviation of each Gaussian becomes smaller and smaller (C, in blue). The big drawback is that we lose all the original peak-area information: the transformed data is highly resolved, but at the cost of losing the true areas.

  2. Alternatively, we can add a multiple of the first derivative of the signal and subtract a multiple of the second derivative from the original signal, i.e., $\text{sharpened} = \text{original} + K\,(\text{first derivative}) - J\,(\text{second derivative})$.

$K$ and $J$ are small positive real numbers. This neat "trick" maintains the true area because the area under the derivative terms is negligible (exactly zero in the ideal case); a small numerical sketch is given below.
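As an illustration only (not part of the original description), here is a minimal numerical sketch of the derivative-based sharpening applied to two overlapping Gaussians; the peak widths, the grid, and the values of K and J are arbitrary choices:

    import numpy as np

    # Two overlapping Gaussian peaks on a common grid (arbitrary example values).
    x = np.linspace(0, 10, 2001)
    dx = x[1] - x[0]
    signal = (np.exp(-(x - 4.6) ** 2 / (2 * 0.35 ** 2))
              + 0.8 * np.exp(-(x - 5.4) ** 2 / (2 * 0.35 ** 2)))

    # Sharpened signal = original + K * (first derivative) - J * (second derivative).
    K, J = 0.0, 0.05  # K = 0 for this symmetric test; the post also uses a small positive K
    d1 = np.gradient(signal, dx)
    d2 = np.gradient(d1, dx)
    sharpened = signal + K * d1 - J * d2

    # The total area is (nearly) preserved: the derivative terms integrate to ~0
    # because the signal vanishes at both ends of the window.
    print(signal.sum() * dx, sharpened.sum() * dx)

Plotting signal and sharpened on the same axes shows visibly narrower peaks while the summed area stays essentially unchanged.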

Do mathematicians use any other transformations which make each overlapping peak very narrow yet maintain the original peak areas? I am not interested in curve-fitting techniques at the moment. Any pointers to similar "peak sharpening" transformations that can resolve overlapping signals would be appreciated.

Thanks.

[figure: simulated sum of six overlapping Gaussian peaks]

Inventory: how to add a range of items and reduce the probability of human error?

Problem: I have an inventory that serves one purpose: inserting the same type of item (dongles), which varies only in its serial number. Every month someone buys a box that comes with a consecutive range of numbers.


So a sequence usually starts at, say, 2196, and the final number is 2396 (if 200 units are purchased).

My dev is reluctant to add any type of editing to prevent human error, for the following reasons:

  1. It's done 4 times a year.
  2. If someone has the ability to delete a record, it will mess up a reference, and then there is no reference at all, unless a history is built (more work).
  3. The project is too big and we need to ship.

I don't agree with him, because users, including highly trained ones, make input mistakes. We are using a jQuery library for editing, which is not a big deal to activate (I can do it myself), but it involves costs for testing and backend work.

This is what I came up with, but I still think it is not ideal:

[screenshot of the proposed design]

Right now I have this popup to insert:

What would be the best (low-cost) approach to reduce human input error in this case?

Note: the status is changed dynamically based on when the dongle is activated. The purpose of the inventory is to record each dongle's first entry before it is sold to the final customer, and to track whether it was stolen or lost.

How to reduce entropy?

This is not necessarily a research question, since I do not know whether anyone is working on this, but I hope to gain some insight by asking it here:

The idea behind this question is to attach an entropy to a natural number in a "natural" way, such that "multiplication increases entropy". (Of course one can attach very different entropies to natural numbers such that multiplication reduces entropy, but I will try to give an argument for why this choice is natural.) Let $n \ge 2$ be a composite number and $\phi$ the Euler totient function. Suppose a factorization algorithm $A$ outputs a number $X$ with $1 \le X \le n-1$ and $1 < \gcd(X, n) < n$, each such value with equal probability $\frac{1}{n-1-\phi(n)}$. Then we can view $X$ as a random variable and attach to it the entropy $H(X_n) := H(n) := \log_2(n-1-\phi(n))$.
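For example, for $n = 6$ we have $\phi(6) = 2$ and the admissible outputs are $X \in \{2, 3, 4\}$, so $H(6) = \log_2(6 - 1 - \phi(6)) = \log_2 3 \approx 1.58$ bits.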

The motivation behind this definition comes from an analogy to physics: I think that "one-way functions" correspond to the "arrow of time" in physics. Since the arrow of time increases entropy, so should one-way functions, if they exist. Since integer factorization is a known candidate for a one-way function, my idea was to attach an entropy which increases when multiplying two numbers.

It is proved here ( https://math.stackexchange.com/questions/3275096/does-entropy-increase-when-multiplying-two-numbers ) that:

$ H(mn) > H(n) + H(m)$ for all composite numbers $ n \ge 2, m \ge 2$

For $n = pq$ with primes $p < q < 2p$, we have $n - 1 - \phi(n) = pq - 1 - (p-1)(q-1) = p + q - 2$, so the entropy is $H(pq) = \log_2(p+q-2) > \log_2(2p-2) = 1 + \log_2(p-1) \approx 1 + \log_2(p)$.
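As a quick numerical check (illustration only), a short script using SymPy's totient confirms both the count $n - 1 - \phi(n)$ and the superadditivity $H(mn) > H(m) + H(n)$ on small composites:

    from math import gcd, log2
    from sympy import totient

    def H(n):
        """Entropy attached to a composite n: log2 of the number of admissible outputs X."""
        return log2(int(n - 1 - totient(n)))

    # n - 1 - phi(n) counts exactly the X in [1, n-1] with 1 < gcd(X, n) < n.
    for n in (6, 15, 21, 35):
        count = sum(1 for x in range(1, n) if 1 < gcd(x, n) < n)
        assert count == n - 1 - totient(n)

    # H(mn) > H(m) + H(n) for a few small composite pairs.
    for m, n in ((4, 6), (6, 9), (15, 21)):
        print(m, n, round(H(m) + H(n), 3), "<", round(H(m * n), 3))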

At the beginning of the factorization, the entropy of $X_n$ for $n = pq$ is $\log_2(p+q-2)$, since it is "unclear" which value $X = x$ will be printed. The algorithm must output some $X = x$ as described above, and once the value of $X$ is known, the entropy of $X$ drops to $0$. So the algorithm must reduce the entropy from $\log_2(p+q-2)$ to $0$. From physics it is known that reducing entropy requires work, hence the algorithm "must do some work" to reduce the entropy.

My question is: which functions reduce entropy? (That is, how many function calls must the algorithm make, at a minimum, to reduce the entropy by the amount described above?)

Thanks for your help!