Random variate generation in Type-2 computability

Is there any existing literature on applying the theory of Type-2 computability to the generation of random variates? By “random variate generator” I mean a computable function $f\colon\subseteq\{0,1\}^{\omega}\rightarrow D$ such that, if $p$ is a random draw from the standard (Cantor) measure on $\Sigma^{\omega}$, then $f(p)$ is a random draw from a desired probability distribution on $D$. Think of $f$ as having access to an infinite stream of random bits it can use in generating its output value. Note that $f$ need not be a total function, as long as its domain has (Cantor) measure 1.

It seems to me that the way to proceed would be to require that one specify a topology on $D$, in fact a computable topological space [1] $\boldsymbol{S}=(D, \sigma, \nu)$, where $\sigma$ is a countable subbase of the topology and $\nu$ is a notation for $\sigma$, and to use the standard representation $\delta_{\boldsymbol{S}}$ of $\boldsymbol{S}$. One might also want membership in the atomic properties $A\in\sigma$ to be “almost surely” decidable; that is, there is some computable $g_A\colon\subseteq\{0,1\}^{\omega}\rightarrow\{0,1\}$ whose domain has measure 1, such that

$$ g_A(p) = 1 \mbox{ iff } f(p)\in A $$

whenever $p\in\mathrm{dom}(g_A)$.
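
To make the kind of object I have in mind concrete, here is a toy sketch of my own (not taken from the literature; JavaScript is used purely for illustration): a generator of Bernoulli($p$) variates for rational $p$ that consumes a stream of fair bits and halts with probability 1, so its domain has Cantor measure 1.

// Read the bit stream as the binary expansion of a uniform u in [0,1) and
// stop as soon as the interval of values still possible for u lies entirely
// on one side of p = num/den. The only input on which this never halts is
// the expansion of p itself, a set of Cantor measure 0.
function bernoulliFromBits(nextBit, num, den) {
  const numB = BigInt(num), denB = BigInt(den);
  let low = 0n, scale = 1n;   // after k bits, u lies in [low/scale, (low+1)/scale), scale = 2^k
  while (true) {
    low = 2n * low + BigInt(nextBit());
    scale *= 2n;
    if ((low + 1n) * denB <= numB * scale) return 1;   // whole interval below p, so u < p
    if (low * denB >= numB * scale) return 0;          // whole interval at or above p, so u >= p
  }
}

// Example: a Bernoulli(1/3) draw from fair coin flips.
const fairBit = () => (Math.random() < 0.5 ? 1 : 0);
console.log(bernoulliFromBits(fairBit, 1, 3));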

I’m working on a problem that needs a concept like this, and I’d rather not reinvent the wheel if it has already been well explored.

[1] See Definition 3.2.1 on p. 63 of Weihrauch, K. (2000), Computable Analysis: An Introduction.

Generating trusted random numbers for a group?

Alice and Bob need to share some cryptographically secure random numbers. Alice does not trust Bob, and Bob does not trust Alice. Clearly, if Alice generates some numbers and hands them to Bob, Bob is skeptical that these numbers are, in fact, random, and suspects that Alice has instead generated numbers that are convenient for her.

One naive method might be for each of them to generate a random number and combine the two in some way (e.g. XOR). Since the numbers must be shared, and someone has to reveal theirs first, we might add a hashing scheme wherein:

1) Alice and Bob each generate a random number, hash it, and send the hash to the other (to allow for verification later, without disclosing the original number).
2) When both parties have received the other’s hash, they share the original numbers, verify them against the hashes, XOR the two numbers, and confirm the result with each other.
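
For concreteness, here is a rough sketch of that exchange in Node.js; the random nonce inside each commitment is my own addition, since a bare hash of a short number could simply be brute-forced:

const crypto = require('crypto');

// Step 1: each party commits to its number by hashing (nonce || number) and
// sending only the hash.
const commit = (value) => {
  const nonce = crypto.randomBytes(16);
  const hash = crypto.createHash('sha256').update(Buffer.concat([nonce, value])).digest();
  return { hash, nonce };
};

// Step 2: once both hashes have been exchanged, each party reveals its number
// and nonce; the other side re-hashes to verify, and the numbers are XORed.
const verify = (hash, nonce, value) =>
  crypto.createHash('sha256').update(Buffer.concat([nonce, value])).digest().equals(hash);

const aValue = crypto.randomBytes(16), bValue = crypto.randomBytes(16);
const aCommit = commit(aValue), bCommit = commit(bValue);

if (verify(aCommit.hash, aCommit.nonce, aValue) && verify(bCommit.hash, bCommit.nonce, bValue)) {
  const shared = Buffer.from(aValue.map((byte, i) => byte ^ bValue[i]));   // byte-wise XOR
  console.log(shared.toString('hex'));
}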

However, this has a number of problems (which I’m not sure can be fixed by any algorithm). Firstly, even if Alice’s numbers are random, if Bob’s are not, it is not clear to me that the resulting XOR will be random. Secondly, I’m not certain that the hashing scheme described above actually solves the “you tell first” problem.

Is this a viable solution to the “sharing random numbers in non-trust comms” problem? Are there any known solutions to this problem that might work better (faster, more secure, more random, etc.)?

Two random variables that sum to a user-defined value

I am looking to create an application so that a 6-year-old can learn math. The application should generate random examples like:

1 + 6
5 + 4
3 + 1
0 + 10
9 + 1

The sum should never exceed 10, and “9 + 1” and “1 + 9” count as two different cases.

I tried to generate random numbers the following way (JavaScript):

const getRandomInt = (min, max) => {
  min = Math.ceil(min);
  max = Math.floor(max);
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

const hh1 = {};
const hh2 = {};
for (let i = 0; i < 10000; i++) {
  const x = getRandomInt(0, 9);
  const y = getRandomInt(0, 9 - x);

  if (!hh1[x]) {
    hh1[x] = 1;
  } else {
    hh1[x] = hh1[x] + 1;
  }

  if (!hh2[y]) {
    hh2[y] = 1;
  } else {
    hh2[y] = hh2[y] + 1;
  }
}

But it obviously didn’t work; the result is:

> hh1
{ '0': 1005, '1': 1037, '2': 952, '3': 951, '4': 1048,
  '5': 986, '6': 1025, '7': 1060, '8': 992, '9': 944 }
> hh2
{ '0': 2850, '1': 2009, '2': 1438, '3': 1092, '4': 821,
  '5': 643, '6': 488, '7': 328, '8': 225, '9': 106 }

The first number looks uniformly random, but the second doesn’t: 0 appears far more often than 9, for example (2850 vs. 106 occurrences above). One way to fix this would be to generate all valid pairs and pick one of them uniformly, as sketched below, but I’m wondering if there is a better way.
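
A rough sketch of that all-pairs approach, just to show what I mean:

// Enumerate every pair (x, y) with x + y <= 10 once, then pick one uniformly,
// so that each case ("9 + 1", "1 + 9", "0 + 10", ...) is equally likely.
const pairs = [];
for (let x = 0; x <= 10; x++) {
  for (let y = 0; y <= 10 - x; y++) {
    pairs.push([x, y]);
  }
}

const randomPair = () => pairs[Math.floor(Math.random() * pairs.length)];
console.log(randomPair());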

Generalization of a Markov random field and a Bayesian network?

I am seeking a graphical model that is a generalization of both a Markov random field (MRF) and a Bayesian network (BN).

From the Markov random field wiki page:

A Markov network or MRF is similar to a Bayesian network in its representation of dependencies; the differences being that Bayesian networks are directed and acyclic, whereas Markov networks are undirected and may be cyclic. Thus, a Markov network can represent certain dependencies that a Bayesian network cannot (such as cyclic dependencies); on the other hand, it can’t represent certain dependencies that a Bayesian network can (such as induced dependencies).

From the above description, particularly the last sentence, it appears that neither MRFs nor BNs are more general than the other.

Question: Is there a graphical model that encompasses both MRFs and BNs?

I believe such a graphical model will need to be directed so as to be able to model the (undirected) dependencies in an MRF (by including a directed edge in each direction).

How can I handle players who want to browse shops at random?

When one or more of my players decide to go on a shopping spree, I’ve previously had trouble describing a shop’s inventory without either going into too much detail or making it seem like the shop has only two items.

Of course, I can describe the atmosphere and the general nature of the inventory (e.g. “herbs” or “jewelry”), and that’s just fine if the players walk into the shop looking for something specific, such as an herb that stops bleeding or a silver necklace with a sapphire embedded into it.

However, I’m unsure how to handle players who have just noticed, “Hey, I’ve got 500 gold floating around that I want to spend on useless stuff.” In other words, players who want to browse random shops with random inventories to see if they find something of interest.

In the real world, this works because you can literally walk into a random shop and look around to see if there’s anything interesting. In D&D, the DM has to come up with something, and it’s boring and frustrating for the players if it’s always the same things.

So, what can I do to make random shopping interesting for the players without, for example, preparing huge inventory lists in advance?

Why does my mobile send SMS to random numbers without my permission?

My mobile sends SMS to random numbers without my permission. All messages are sent automatically. Here is an example of one such SMS:

SBIUPI qUrXgeX26iEY%2B2si9JHhubIjm7R2aHoo6pWcbXBpJho%3D

All the recipients are Indian numbers (mostly on Airtel SIM cards). When I checked the Truecaller app, I found that those numbers are named “Cybercrime Fraud” and have been reported as spam by more than 18,000 people.

I have checked my apps’ SMS permissions and looked for hidden apps, but I couldn’t find anything harmful.

Is there anything to be afraid of?

Anonymous (privacy-preserving) random walks for graphs

Quoting this paper – SmartWalk (https://dl.acm.org/doi/pdf/10.1145/2976749.2978319):

For graph privacy, strong link privacy relies on deep perturbation to the original graph, indicating a large random walk length. However, as the fixed random walk length increases, the perturbed graph gradually approaches to a random graph, incurring a significant loss of utility.

They propose a machine-learning-based approach to determining the appropriate random-walk length as a trade-off between utility and security/privacy. However, is there (at all) an anonymous or privacy-preserving method of conducting the random walk itself?
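
For context, this is my rough reading of the perturbation the quote refers to (an assumption on my part, not necessarily the paper’s exact mechanism): each edge (u, v) is replaced by (u, w), where w is the endpoint of a t-step random walk started at v, so a larger t means a deeper perturbation.

const randomWalkEnd = (adj, start, t) => {
  let node = start;
  for (let i = 0; i < t; i++) {
    const nbrs = adj[node];
    node = nbrs[Math.floor(Math.random() * nbrs.length)];   // move to a uniformly chosen neighbour
  }
  return node;
};

// Replace each undirected edge (u, v) by (u, w), with w drawn from a t-step walk from v.
const perturb = (adj, t) => {
  const edges = new Set();
  for (const u of Object.keys(adj).map(Number)) {
    for (const v of adj[u]) {
      if (u < v) {                                           // visit each undirected edge once
        const w = randomWalkEnd(adj, v, t);
        if (w !== u) edges.add(`${Math.min(u, w)}-${Math.max(u, w)}`);
      }
    }
  }
  return edges;
};

// Toy example: a small undirected graph as an adjacency list.
const adj = { 0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2] };
console.log(perturb(adj, 3));

My concern is that carrying out randomWalkEnd itself (for example in a distributed setting) exposes the walk’s trajectory to every node it visits, and that is the part I would like to make anonymous or privacy-preserving.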

Random walk problem in MATLAB

Can you help me write MATLAB code for this problem? I have tried many times and have not succeeded.

Problem 2.1. First-passage time. The first-passage time (FPT) is defined as the time it takes for a random walker to reach a certain target position for the first time. Consider a special case of the first-passage time: the first-return time, i.e. the time at which the random walker first returns to the origin. Let us denote the probability that this happens at time t as F(t). Using your random walk data from the previous week (or generating new data, if you prefer), please make a histogram of first-return times of 10^5 symmetric random walks with step size ∆x = 1. Please do it for random walks whose duration is 10^4 steps, and also 10^5 steps. Plot the probability F(t) as a function of time. Now do the same but on a log-log scale. Use this to infer how F(t) falls with time for t >> 1.
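
To make the task concrete, here is a sketch of the computation I have in mind (not in MATLAB; it just tallies the first-return-time histogram, which is the part I need translated):

// Simulate symmetric +/-1 random walks, record the first time each one
// returns to the origin, and tally the counts that F(t) is built from.
const nWalks = 1e5;
const nSteps = 1e4;                          // repeat with nSteps = 1e5
const firstReturnCounts = new Map();

for (let w = 0; w < nWalks; w++) {
  let pos = 0;
  for (let t = 1; t <= nSteps; t++) {
    pos += Math.random() < 0.5 ? 1 : -1;     // step size dx = 1
    if (pos === 0) {                         // first return to the origin
      firstReturnCounts.set(t, (firstReturnCounts.get(t) || 0) + 1);
      break;                                 // walks that never return within nSteps are dropped
    }
  }
}

// F(t) is the count at time t divided by nWalks; plotting it on linear and
// log-log axes should reveal how it falls for large t.
const F = [...firstReturnCounts.entries()]
  .sort((a, b) => a[0] - b[0])
  .map(([t, c]) => [t, c / nWalks]);
console.log(F.slice(0, 10));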

Expected number of loop iterations when searching an array by random index

Let's say we have an array A of size n, indexed from 1 to n. It contains a value x, with x occurring k times in A, where 1 <= k <= n.

If we have a search algorithm like so:

while true:
  i := random(1, n)
  if A[i] == x
    break

random(a, b) picks an integer uniformly at random from a to b.

From this we know that the chance of finding x and terminating the program is k/n on each iteration. What I would like to know is the expected number of iterations, or more specifically the expected number of times the array is accessed, given the array A described above.
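
My guess, assuming the iterations are independent, is that the iteration count is geometric with success probability k/n, which would give

$$ E[\text{iterations}] = \sum_{t=1}^{\infty} t\cdot\frac{k}{n}\left(1-\frac{k}{n}\right)^{t-1} = \frac{n}{k}, $$

with one array access per iteration, but I would like this confirmed (or corrected).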