## What could be a good approach to compare environmental sensor data?

The premise in this case is local proximity between the sensors. Furthermore, let's assume that the data is provided by private individuals. One goal could be the detection of outliers, whether introduced intentionally or by mistake.
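One common starting point for this setting is to treat nearby sensors as redundant measurements of the same quantity and flag readings that deviate from the local consensus. A minimal Python sketch (function name and threshold are my own, and the modified z-score with median/MAD is just one robust choice among many):

```python
import statistics

def flag_outliers(readings, threshold=3.5):
    """Flag sensors whose reading deviates strongly from the local consensus.

    readings: {sensor_id: value} for sensors that are close together, so their
    true values should roughly agree. Uses the modified z-score based on the
    median and the median absolute deviation (MAD), which stays robust even
    when the outliers we are hunting for are present in the data.
    """
    values = list(readings.values())
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values) or 1e-9  # avoid /0
    return {
        sensor_id: abs(0.6745 * (value - med) / mad) > threshold
        for sensor_id, value in readings.items()
    }
```

Because the median and MAD ignore extreme values, a single manipulated sensor cannot easily shift the consensus it is being compared against, which matters when the data comes from untrusted private contributors.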

Any suggestions or papers?

## How would companies like Facebook, Google, and Amazon approach data security in their data centers?

Such companies handle terabytes of data, and I would imagine firewalls would be a bottleneck. Do they even use firewalls? Virtual firewalls? IPS systems?

## Fast approach to summation in Compile[]?

My code does massive summation and matrix multiplication.

Compile[] has sped it up considerably. But from literature related to my program, it seems there are approaches to make it even faster, either by optimizing the Mathematica code or by improving the algorithm itself.

My first piece of code is below.

```mathematica
MomentComputing = Compile[
  {{Mmax, _Integer}, {k, _Integer}, {image, _Real, 2},
   {xLegendreP, _Real, 2}, {yLegendreP, _Real, 2}},
  Block[{m, n, width, momentMatrix},
   width = Length[image];
   momentMatrix = Table[0., {Mmax + 1}, {Mmax + 1}];
   Do[
    momentMatrix[[m + 1, n + 1]] =
     ((2. (m - n) + 1.) (2. n + 1.)/((k k)*width width))*
      xLegendreP[[m - n + 1]].image.yLegendreP[[n + 1]],
    {m, 0, Mmax}, {n, 0, m}];
   momentMatrix],
  CompilationTarget -> "C", RuntimeAttributes -> {Listable},
  Parallelization -> True, RuntimeOptions -> "Speed"]
```

It would be better not to use any loop operations, but I cannot figure out another approach. The matrix-vector multiplication is probably time-consuming as well.
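To see the structure of the computation, here is a hedged NumPy sketch (not the original Mathematica; names and shapes are my assumptions): every inner product `xLegendreP[[i]].image.yLegendreP[[n]]` is one entry of a single batched product `x_leg @ image @ y_leg.T`, so only the cheap scalar coefficients need an explicit loop.

```python
import numpy as np

def moment_matrix(Mmax, k, image, x_leg, y_leg):
    """NumPy analogue of MomentComputing.

    image : (width, width) array
    x_leg, y_leg : (Mmax+1, width) arrays of sampled Legendre polynomials
    """
    width = image.shape[0]
    # All inner products at once: B[i, n] = x_leg[i] . image . y_leg[n].
    B = x_leg @ image @ y_leg.T
    out = np.zeros((Mmax + 1, Mmax + 1))
    for m in range(Mmax + 1):
        for n in range(m + 1):
            coeff = (2.0*(m - n) + 1.0)*(2.0*n + 1.0) / ((k*k)*width*width)
            out[m, n] = coeff * B[m - n, n]  # pick the needed entry of B
    return out
```

The remaining double loop only touches scalars, so the heavy lifting is the two matrix multiplications, which is essentially what the Compile[]'d version already does well.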

Second piece.

```mathematica
Reconstruction = Compile[
  {{lambdaMatrix, _Real, 2}, {lPoly, _Real, 2}},
  Block[{Mmax, width, x, y, m, n, reconstructedImage},
   {Mmax, width} = Dimensions[lPoly];
   reconstructedImage = Table[0., {width}, {width}];
   Do[
    reconstructedImage[[x, y]] =
     Sum[lambdaMatrix[[m + 1, n + 1]]*lPoly[[m - n + 1, x]]*
       lPoly[[n + 1, y]], {m, 0, Mmax - 1}, {n, 0, m}],
    {x, 1, width}, {y, 1, width}];
   reconstructedImage],
  CompilationTarget -> "C", RuntimeAttributes -> {Listable},
  Parallelization -> True, RuntimeOptions -> "Speed"];
```

Likewise, I don't want a Do[] loop here either. In addition, I think Sum[] is a very slow function.
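The triangular double sum can in fact be removed entirely: substituting i = m − n, j = n turns sum over {m, n ≤ m} of lambda[m, n]·lPoly[m−n, x]·lPoly[n, y] into sum over {i, j} of A[i, j]·lPoly[i, x]·lPoly[j, y] with A[i, j] = lambda[i+j, j], which is just two matrix products. A hedged NumPy sketch (names are mine; shapes follow the second Compile[] block, with 0-based indexing):

```python
import numpy as np

def reconstruct(lambda_matrix, l_poly):
    """Loop-free version of the Do/Sum reconstruction.

    lambda_matrix : (Mmax, Mmax) array with entries lambda[m, n] for n <= m
    l_poly        : (Mmax, width) array of sampled Legendre polynomials
    """
    Mmax, width = l_poly.shape
    # Re-index the triangular sum: A[i, j] = lambda[i + j, j] for i + j < Mmax.
    A = np.zeros((Mmax, Mmax))
    for j in range(Mmax):
        for i in range(Mmax - j):
            A[i, j] = lambda_matrix[i + j, j]
    # image[x, y] = sum_{i,j} l_poly[i, x] * A[i, j] * l_poly[j, y]
    return l_poly.T @ A @ l_poly
```

The re-indexing loop is O(Mmax²) on scalars and negligible; the O(Mmax·width²) work all happens inside the two BLAS-backed matrix multiplications. The same re-indexing idea should translate back to Mathematica via `Transpose[lPoly].A.lPoly`.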

I can give all my code if necessary.

Edit 1:

Following Michael's suggestion, the first part is now fast enough and needs no further acceleration. The second part is the main time-consuming part; I believe it can still be sped up.

## Unity3D URP – How do I approach creating Fog of War for 3D top-down stealth game?

What I am trying to achieve is a fog of war system for a 3D top-down stealth game. I have searched the Internet, and it seems the behavior I want can be achieved by using a secondary camera that renders its output to a render texture; the render texture is then applied, with a masking shader, to a plane that sits above the map. I was following this video: https://www.youtube.com/watch?v=PNAvNeOTnSE — however, my project uses URP in Unity, and the Clear Flags option in the camera settings is missing. So after each rendering loop the render texture gets cleared and drawn again, which makes it impossible to "save" areas that have already been revealed by the player.

Maybe this can be done using another technique? I would like a fog of war that covers entire rooms with an opaque black plane and reveals them upon entry, but remains cleared once visited. Visited areas should turn gray once left, and enemies should be hidden from the player when not in line of sight. Something like in this video: https://youtu.be/jrK_Uvwwk9I?t=1247

I would really appreciate it if someone pointed me in the right direction.

## Weighted online matching: a randomized approach

Let’s consider the edge weighted online matching problem.

It is obvious that no deterministic algorithm can be competitive against an oblivious adversary, since any new edge could have an arbitrarily large weight.
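The standard adversary argument makes this concrete (a sketch, assuming the usual model without free disposal: one offline vertex $v$, online arrivals $u_1, u_2$, and an arbitrary weight $W$):

```latex
u_1 \text{ arrives with } w(u_1, v) = 1.
\begin{cases}
\text{ALG matches } (u_1, v): & u_2 \text{ arrives with } w(u_2, v) = W \gg 1,
  \text{ so } \frac{\mathrm{OPT}}{\mathrm{ALG}} = W,\\[4pt]
\text{ALG skips } u_1: & \text{no further arrivals, so } \mathrm{ALG} = 0,\ \mathrm{OPT} = 1.
\end{cases}
```

Since a deterministic algorithm's decision on $u_1$ is known in advance, the oblivious adversary can always pick the bad branch, so no finite competitive ratio is achievable deterministically.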

Can a randomized algorithm improve upon this result?

## Essence of the cost benefit obtained by using "markings" in Fibonacci heaps (via a mathematical approach)

The following excerpts are from the section on Fibonacci heaps in the text Introduction to Algorithms by Cormen et al.

The authors introduce the notion of marking the nodes of Fibonacci heaps, with the background that the marks are used to bound the amortized running time of the $$\text{Decrease-Key}$$ and $$\text{Delete}$$ operations, but not much intuition is given behind their use.

What would go wrong if we did not use markings? Or if we performed $$\text{Cascading-Cut}$$ when the number of children lost from a node is not just $$2$$ but possibly more?

The excerpt corresponding to this is as follows:

We use the mark fields to obtain the desired time bounds. They record a little piece of the history of each node. Suppose that the following events have happened to node $$x$$:

1. at some time, $$x$$ was a root,
2. then $$x$$ was linked to another node,
3. then two children of $$x$$ were removed by cuts.

As soon as the second child has been lost, we cut $$x$$ from its parent, making it a new root. The field $$mark[x]$$ is true if steps $$1$$ and $$2$$ have occurred and one child of $$x$$ has been cut. The Cut procedure, therefore, clears $$mark[x]$$ in line $$4$$, since it performs step $$1$$. (We can now see why line $$3$$ of $$\text{Fib-Heap-Link}$$ clears $$mark[y]$$: node $$y$$ is being linked to another node, and so step $$2$$ is being performed. The next time a child of $$y$$ is cut, $$mark[y]$$ will be set to $$\text{TRUE}$$.)
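The mark-and-cut discipline the excerpt describes can be sketched in a few lines of Python (a minimal sketch of CLRS's Cut and Cascading-Cut only; it omits degree bookkeeping, the circular root list, and min-pointer maintenance):

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.parent = None
        self.children = []
        self.mark = False  # True once this node has lost one child

def cut(roots, x):
    """Move x from its parent's child list to the root list, clearing its mark."""
    x.parent.children.remove(x)
    x.parent = None
    x.mark = False          # x becomes a root again (step 1 in the excerpt)
    roots.append(x)

def cascading_cut(roots, y):
    """y just lost a child: mark it on the first loss, cut it on the second."""
    p = y.parent
    if p is not None:
        if not y.mark:
            y.mark = True            # first child lost: just remember it
        else:
            cut(roots, y)            # second child lost: cut y as well...
            cascading_cut(roots, p)  # ...and recurse up toward the root
```

The amortized accounting hides the cost of these cascades in the potential function: every marked node prepays for its own future cut, so a long chain of cascading cuts is paid for by the marks it clears.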

[The intuition behind using the marking in the way stated in the italicized portion of the block above was made clear to me by the lucid answer here, but I still do not see the cost benefit we get from markings: what would go wrong if we did not use them? The answer here talks about the benefit, but no mathematics is used for the counterexample given.]

The entire corresponding portion of the text can be found here for quick reference.

## Is this passwordless authentication flow secure?

I implemented passwordless authentication with a good UX in mind, but I am not a security expert, so I am asking for your advice.

This is the authentication flow:

1. User types in their email address
2. Client sends the email address to the API
3. API creates the user if it does not exist
4. API generates a short-lived JWT with a UUID and stores the user ID and session ID as claims
5. Token ID and session ID get saved to the DB with a confirmed flag
6. API sends this token to the email address
7. User clicks the link on any device of choice
8. If the token is valid and the claims match the data in the DB, the confirmed flag is set to true and a last_login field is set to the token's iat (not really sure if I need that ^^)
9. Meanwhile, the client where the user logged in polls for confirmation and updates the session if the login was confirmed
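Steps 4 and 8 (issue a short-lived signed token, then verify its signature and expiry before matching the claims against the DB) can be sketched with the standard library alone. This is only an illustration of the token mechanics, not your implementation: a real deployment would use a JWT library and a proper key-management story, and the secret, TTL, and function names here are my assumptions.

```python
import base64, hashlib, hmac, json, time, uuid

SECRET = b"server-side-secret"   # assumption: a real key, kept out of source
TOKEN_TTL = 15 * 60              # short-lived: 15 minutes

def issue_login_token(user_id, session_id):
    """Step 4: sign a short-lived token carrying user, session and a unique jti."""
    now = int(time.time())
    claims = {
        "jti": str(uuid.uuid4()),   # token id, stored in the DB in step 5
        "sub": user_id,
        "sid": session_id,
        "iat": now,
        "exp": now + TOKEN_TTL,
    }
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify_login_token(token):
    """Step 8: check signature and expiry; return claims to match against the DB."""
    body, _, sig = token.partition(".")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                 # tampered or foreign token
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["exp"] < time.time():
        return None                 # expired link
    return claims
```

The constant-time comparison (`hmac.compare_digest`) and the server-side expiry check matter here: the emailed link is effectively a one-time password, so the confirmed flag in your DB should also be checked so the same token cannot be replayed after use.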

## Can Apple and Google bypass the decentralised COVID-19 tracing approach (DP-3T)?

Much thought has been spent on creating the decentralised, minimum-knowledge contact-tracing approach DP-3T. It has been implemented in the contact-tracing apps of several countries, e.g. the German Corona-Warn-App. Following this approach, there is no central instance that can identify users' contact history.

However, as the apps depend on specialised APIs provided by Google Play Services (Android) and corresponding iOS features, my question is:

Can Apple and Google, e.g. by logging API usage, bypass the decentralised approach? In effect, can they create contact-history profiles?

Please note: This is a theoretical question about the implementation that I do not understand. I do not imply that there is any abuse of this kind; I just wonder if it would be possible technically.

## Lectures and books for beginner to approach learning simulations

I'm an incoming undergrad with a math background up to single-variable calculus, but a reasonably strong programming background through algorithms, data structures, and web and mobile app development. Broadly, I'm really interested in learning to simulate molecular systems, game physics, and so on, but I have never simulated anything before. I'd also like what I learn to be transferable to financial markets, climate models, etc., so I'm really focusing on the principles of simulation, the analysis and motivation behind simulation algorithms, the limits of simulations, and more.

I’ve gone through MIT OCW 6.0002, and spent some time learning about random walks and Monte Carlo methods, as well as frequenting a smattering of random pages on molecular dynamics, but I’d really appreciate some structured, motivated resources for an absolute beginner with a programming background to learn about using programming to simulate things.

## Minimizing the number of coins in the coin change problem using the dynamic programming approach

I'm learning the dynamic programming approach to solving the coin change problem, and I don't understand the substitution part.

Given: amount=9, coins = [6,5,1],

the instructor simplified it with this function:

minCoins(9) = min{ minCoins(9−6) + 1, minCoins(9−5) + 1, minCoins(9−1) + 1 } = min{ 3+1, 4+1, 3+1 } = 4

I don't understand the logic of this min: why can we say that, to make change for an amount of 9, we simply take the minimum of minCoins(9 − c) + 1 over all coins c?

here’s a Gif that visualizes the instructor’s approach: https://i.stack.imgur.com/Zx2cG.gif

(Taken from the Algorithmic Toolbox course; instructor: Prof. Pavel A. Pevzner.)
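The recurrence the question is about — minCoins(a) = 1 + min over coins c of minCoins(a − c), i.e. "spend one coin c, then make change optimally for what remains" — can be sketched bottom-up in a few lines of Python (a minimal sketch; the function name is mine, not the course's):

```python
def min_coins(amount, coins):
    """Bottom-up DP: best[a] = fewest coins summing to a (None if impossible)."""
    INF = float("inf")
    best = [0] + [INF] * amount          # best[0] = 0: no coins needed
    for a in range(1, amount + 1):
        for c in coins:
            # Try paying one coin c, then optimally making change for a - c.
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount] if best[amount] != INF else None
```

For amount 9 with coins [6, 5, 1] this fills best[8] = 3 (6+1+1) and best[3] = 3 (1+1+1), so best[9] = min(3, 4, 3) + 1 = 4, matching the instructor's result.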