## Can a neural network process randomness?

So the question is: is it theoretically possible to feed a neural network random values and expect a meaningful output, given that randomness is, in most cases, just a lack of knowledge?

For this question, I’ve got some examples.

First case, not a real problem?

We toss a coin, record the result, and repeat this a whole bunch of times. For each toss, we also record the initial conditions (air pressure, force, etc.). Now we feed all this data into the neural network for it to process.

My guess: the result is not really random, since it depends only on the initial conditions, so it's possible, and the neural network will do a great job. So I guess this example is not a real problem, since the "degree of randomness" is weak.
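To make this first case concrete, here is a toy sketch (the "physics" function is entirely made up): the outcome is a deterministic function of the initial conditions, so even a tiny network trained by plain gradient descent can recover it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up "physics": the outcome is a fixed deterministic function
# of the initial conditions (force, pressure).
def flip_outcome(force, pressure):
    return (np.sin(3.0 * force) + 0.5 * pressure > 0).astype(float)

X = rng.uniform(-1, 1, size=(2000, 2))      # columns: force, pressure
y = flip_outcome(X[:, 0], X[:, 1])

# Tiny one-hidden-layer network trained by full-batch gradient descent.
W1 = rng.normal(0, 1, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 1, (16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2))), h

for _ in range(2000):
    p, h = forward(X)
    err = p - y[:, None]                    # gradient of log loss w.r.t. logit
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    for param, grad in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        param -= 0.5 * grad                 # in-place update of the globals

acc = ((forward(X)[0][:, 0] > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")      # well above the 50% chance level
```

The point is only that nothing here is random from the model's perspective: given the full initial conditions, the label is a function, and functions are what networks fit.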

Second case: more questionable

Now we generate a random list of numbers and sentences that correspond to each other, so we have something like:

```
'zefvkbdl' -> 1613841.009
'nfeovhlzm' -> 963478.29
'jhgcjbklnsczl' -> 1.535953
'ergz' -> 9138630.26
etc ...
```

Everything was randomly generated (still, the sentences and numbers were not generated separately: each number was generated right after a sentence and corresponds to that sentence). In that case, is it possible to give a neural network half of the list (the list can be arbitrarily long) and expect it to predict the other half with great precision?

My guess: it depends on the generation algorithm, but let's pretend a letter is just a particular index into an array and that the index was randomly generated. Since numbers are most often generated from the digits of the current time (the last decimals, which change extremely fast), I'm not sure; I guess it might theoretically be possible for an extremely powerful neural network to do that job.
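A minimal sketch of one possible generation scheme (the seeding rule here is entirely an assumption for illustration): if each number is derived from its sentence, the pairing is a deterministic function and in principle learnable; if instead both halves came independently from an external entropy source, the first half would carry no information about the second.

```python
import random
import string

rng = random.Random(42)

def random_sentence():
    length = rng.randint(4, 13)
    return ''.join(rng.choice(string.ascii_lowercase) for _ in range(length))

# Made-up rule: the number is seeded by the sentence itself, so it is a
# pure (deterministic) function of the sentence.
def number_for(sentence):
    return random.Random(sentence).uniform(1.0, 1e7)

pairs = [(s, number_for(s)) for s in (random_sentence() for _ in range(4))]
for sentence, number in pairs:
    print(f"{sentence!r} -> {number:.3f}")
```

Under this assumed scheme the data only *looks* random; the sentence fully determines the number, which is the distinction the question hinges on.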

Third case

Let’s now be even more theoretical and suppose it is somehow possible to store the global state of the universe at each moment in time. The only thing that is truly random, to my knowledge, is quantum mechanics, so let’s try it out. At each point in time, we store the whole state of the universe and the outcome of measuring a quantum particle’s state (like the spin of an electron). Is it possible, after training the biggest neural network imaginable, to “predict” the outcome of measuring a quantum particle’s state, knowing the state of the universe?

Since I’m just a curious student, I don’t have a lot of knowledge about neural networks or quantum mechanics, so I probably said a lot of wrong things, and I’m sorry for that. Thank you for reading all of this; I hope someone can help me find an answer or correct me.

Now, the real question I’m asking is: does randomness truly exist?

## Does bounded accuracy increase randomness?

I have been reading DnD 5e for a forthcoming campaign, and I have a small concern with bounded accuracy: the proficiency bonus grows at a slow rate (a level 20 character has only a +6 proficiency bonus, whereas in previous DnD incarnations this would have been a higher bonus in almost any case). Magic weapons were also nerfed (a Holy Avenger is a +3 sword, where in previous editions it was a +5 sword).

All in all, I think the designed outcome is that a high-level character has a lower bonus to their rolls. I have read about the benefits of this, and while I agree on some points, my concern is that since the bonuses have shrunk, the d20 carries a bigger weight in the outcome of an action.
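To quantify the concern, a small sketch (assuming 5e attack-roll conventions, where a natural 20 always hits and a natural 1 always misses): each +1 of bonus is worth roughly 5 percentage points, so capping totals far below old-edition stacks leaves the flat 1-20 spread of the die dominating. The specific bonuses and DC below are illustrative, not from any book.

```python
from fractions import Fraction

# 5e attack-roll convention assumed: a natural 20 always hits and a
# natural 1 always misses, regardless of the bonus.
def hit_chance(bonus, dc):
    successes = sum(1 for roll in range(1, 21)
                    if roll == 20 or (roll != 1 and roll + bonus >= dc))
    return Fraction(successes, 20)

# Same DC 22 target: a 5e-style level-20 total of +11 (+6 proficiency,
# +5 ability) versus a legacy-style +20 stack of bonuses.
print(hit_chance(11, 22))   # 1/2   -- the d20 decides half the time
print(hit_chance(20, 22))   # 19/20 -- the roll barely matters
```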

Is this a real problem, or am I overthinking it? Has it been addressed by the game designers? How do you handle it in your campaigns?

## PCP variant in P with non-zero randomness and polynomial proof

I am trying to show that a particular language $$L$$ in PCP(log, q) is also in P. The PCP protocol works as follows: the verifier uses logarithmically many random bits and queries q positions in a polynomial-length proof. It accepts if two or more of the queried bits are 1. Assume the protocol has perfect completeness. I think the randomness can be handled in P by the usual methods, but I am having trouble using the bound on the number of queries to show that the language is in P. How can I show that $$L$$ can be recognized by a polynomial-time algorithm? Can we somehow use a MAX-SAT-equivalent form of PCP?
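For the randomness part, here is a toy sketch of the usual enumeration argument (the verifier and its query rule below are made up purely for illustration): with O(log n) random bits there are only polynomially many random strings, so a deterministic machine can check the verifier's behaviour on all of them.

```python
from itertools import product

# Hypothetical stand-in for the PCP verifier: q = 3 queries into the
# proof, accept iff at least two queried bits are 1. The query rule
# is invented for this sketch.
def toy_verifier(random_bits, proof):
    s = sum(random_bits)
    positions = [s % len(proof), (s + 1) % len(proof), (s + 2) % len(proof)]
    return sum(proof[p] for p in positions) >= 2

# With O(log n) random bits there are only poly(n) random strings, so
# enumerating them all is itself a polynomial-time step.
def accepts_everywhere(verifier, proof, num_random_bits):
    return all(verifier(r, proof)
               for r in product((0, 1), repeat=num_random_bits))

print(accepts_everywhere(toy_verifier, [1, 1, 0, 1], 3))   # True
print(accepts_everywhere(toy_verifier, [0, 0, 0, 0], 3))   # False
```

This only derandomizes the verifier for a *given* proof; the remaining difficulty, as noted above, is using the query bound to avoid searching over exponentially many proofs.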

## Kolmogorov randomness for a pseudorandom number generator

I am working on pseudorandom number generation for one of my projects. My goal is to prove that the output is almost Kolmogorov random (exact Kolmogorov complexity being uncomputable). I would appreciate any help or guidance on this subject.
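Since Kolmogorov complexity is uncomputable, one common computable proxy is a compressor, which gives an upper bound: K(x) ≤ |compress(x)| + O(1). A minimal sanity-check sketch along those lines (zlib is just one convenient choice here, and os.urandom stands in for the generator under test):

```python
import os
import zlib

# Kolmogorov complexity K(x) is uncomputable, but any compressor gives
# a computable upper bound: K(x) <= len(compress(x)) + O(1).
def compression_ratio(data: bytes) -> float:
    return len(zlib.compress(data, 9)) / len(data)

patterned = b"ab" * 50_000           # highly regular: compresses very well
unpredictable = os.urandom(100_000)  # stand-in for output of a good PRNG

print(f"patterned:     {compression_ratio(patterned):.3f}")      # near 0
print(f"unpredictable: {compression_ratio(unpredictable):.3f}")  # near 1
```

A ratio near 1 is only evidence of incompressibility by that particular compressor, not a proof of near-Kolmogorov-randomness; a real argument would need a bound over all programs, which is exactly what uncomputability forbids.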

## Known cases of miners withholding blocks when used as randomness?

Some applications use the blockhash as a source of randomness, often with a lottery-like payout. I’ve heard of attacks where it would be more profitable for a miner to withhold a block, losing the block reward but increasing their chances of winning the lottery payout. I’ve heard this has actually happened, but I’ve never seen details.

Are there any known instances of such withholding? Are there numbers on how large the payout would need to be, relative to the percentage of hash power the miner controls?
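For the numbers question, here is a deliberately crude one-step expected-value sketch (every parameter and the model itself are assumptions; it ignores difficulty adjustment, orphan risk, and multi-block strategies):

```python
# One-step toy model (all assumptions): a miner with hash share p has
# just found a block whose hash would lose them the lottery. Publishing
# earns the block reward R; withholding forfeits R for roughly a p * w
# chance of instead mining a winning next block for payout V, where w
# is the per-block probability of a favourable hash.
def withholding_pays(R, V, p, w):
    ev_publish = R
    ev_withhold = p * w * V      # crude: one re-roll, no orphan risk
    return ev_withhold > ev_publish

# Illustrative numbers: 6.25 reward, 30% hash power, 1-in-10 favourable
# hashes. In this toy model the pot must exceed R / (p * w).
print(withholding_pays(R=6.25, V=250, p=0.30, w=0.10))   # True
print(withholding_pays(R=6.25, V=100, p=0.30, w=0.10))   # False
```

Even this rough model suggests the break-even pot is large relative to the block reward unless the miner's hash share is substantial, which may be why documented instances are hard to find.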