How to make a discrete circle in a 2d array?

I’m trying to make a filled-in circle in a 2D matrix: any cell whose distance from the center is less than r is 1, and any cell outside r is 0. I’m looking for something that looks more circular as the array size grows. The circle should be centered in the array.

I’m starting out with an array full of zeros, and trying to use a loop to assign values of 1, but I’m not getting anything that looks remotely like a circle.

Here’s what I’ve tried:

    width = 100; height = 100; radius = 20;
    halfwidth = width/2; halfheight = height/2;
    array = ConstantArray[0, {height, width}];
    For[i = 0, i < width, i++,
     For[j = 0, j < height, j++,
      If[i*i + j*j < radius*radius,
       array[[i + halfheight - radius, j + halfwidth - radius]] = 1]]]

The last line SHOULD iterate over the entire 2D array, check whether each index falls within the circle, and assign a value of 1 if it does; otherwise it does nothing. Obviously, it’s not doing that. This is what it produces:

Not very circular circle

It looks like it might be creating a single quadrant of the circle, but if so, it’s in the wrong place.

So, how do I center this, and make a complete circle?
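
For reference, here is the same idea as a sketch in Python/NumPy (my transcription for comparison, not Mathematica): the membership test has to use each index's distance from the center, rather than testing against the origin and offsetting afterwards.

    import numpy as np

    # Sketch: fill a centered disc by testing distance from the center.
    width, height, radius = 100, 100, 20
    cy, cx = height / 2, width / 2

    array = np.zeros((height, width), dtype=int)
    for i in range(height):
        for j in range(width):
            # shift each index to be relative to the center before the test
            if (i - cy) ** 2 + (j - cx) ** 2 < radius ** 2:
                array[i, j] = 1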

Thanks!

Discrete Fourier transform in different forms

The discrete Fourier transform is defined by

$$f(k)=\sum_{s_i}\exp(-iks_i)\phi(s_i),\tag{ds-1}$$ where $s_i=-(N-1)/2,\,-(N-3)/2,\,\ldots,\,(N-1)/2$.

For convenience, we also add the continuous Fourier transform

$$f(k)=\int_{-\infty}^{\infty}\exp(-iks)\phi(s)\,ds\tag{cn-1}$$

It can be seen that in the discrete case, the integral on the right-hand side of Eq.~(cn-1) is replaced by a sum over the special points $s_i=-(N-1)/2,\,-(N-3)/2,\,\ldots,\,(N-1)/2$. If you take $N\to \infty$, the two formulas should be consistent with each other.

The inverse discrete Fourier transform reads

$$\phi(s)=\frac{1}{N}\sum_{k_i}\exp(iks)f(k),\tag{ds-2}$$ where $k_i=-\frac{2\pi}{N}\frac{N-1}{2},\,\ldots,\,\frac{2\pi}{N}\frac{N-1}{2}$. Letting $N\to \infty$, Eq.~(ds-2) would seemingly become $$\phi(s)\stackrel{?}{=}\frac{1}{N}\int_{-\pi}^{\pi}\exp(iks)f(k)\,dk.\tag{ds-3}$$ I believe Eq.~(ds-3) is wrong if you let $N\to \infty$.

I see that in some books, they use

$$\phi(s)=\frac{1}{2\pi}\int_{-\pi}^{\pi}\exp(iks)f(k)\,dk\tag{ds-4}$$

Question 1): How should one understand equations (ds-3) and (ds-4)? Why should $N$ be replaced by $2\pi$?

Question 2): If $N$ is finite, we can use Eq.~(ds-1) for the discrete Fourier transform, and the inverse is given by (ds-2). But in the discrete case, if $N\to \infty$, how can I get the inverse discrete Fourier transform? Can we use Eq.~(ds-2)? If $N\to \infty$, it seems that Eq.~(ds-2) is not correct.

Question 3): If $N\to \infty$, can we use the continuous inverse Fourier transform, i.e., $$\phi(s)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\exp(iks)f(k)\,dk,\tag{cn-2}$$ to estimate Eq.~(ds-2)? It seems that Eq.~(ds-4) is different from Eq.~(cn-2). Is there some relation between Eq.~(ds-4) and Eq.~(cn-2)?

Question 4): Which formula gives the discrete inverse Fourier transform in the limit $N\to \infty$?
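
Not an answer, but a small numerical check in Python may help pin down how (ds-2) relates to (ds-4); the symmetric grids below are my reading of the conventions above. Since the $k_i$ are spaced by $\Delta k = 2\pi/N$, the factor $\frac{1}{N}$ in (ds-2) is exactly $\frac{\Delta k}{2\pi}$, so the Riemann-sum reading of (ds-4) reproduces (ds-2) term by term.

    import numpy as np

    # Check (ds-1)/(ds-2) against the Riemann-sum reading of (ds-4).
    N = 201
    s = np.arange(N) - (N - 1) / 2          # s_i = -(N-1)/2, ..., (N-1)/2
    k = 2 * np.pi / N * s                   # k_i in [-pi, pi], spacing 2*pi/N
    phi = np.exp(-s**2 / 50.0)              # an arbitrary test signal

    f = np.exp(-1j * np.outer(k, s)) @ phi              # forward, (ds-1)
    phi_ds2 = np.exp(1j * np.outer(s, k)) @ f / N       # inverse, (ds-2)
    dk = 2 * np.pi / N
    phi_ds4 = np.exp(1j * np.outer(s, k)) @ f * dk / (2 * np.pi)  # (ds-4) as a sum

    print(np.max(np.abs(phi_ds2 - phi)))    # ~ machine precision
    print(np.max(np.abs(phi_ds4 - phi)))    # identical: dk/(2*pi) = 1/N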

Any suggestions or related URL or books are welcome! Thanks!

What properties of a discrete function make it a theoretically useful objective function?

A few things to get out of the way first: I’m not asking what properties the function must have such that a global optimum exists; we assume that the objective function has a (possibly non-unique) global optimum which could theoretically be found by an exhaustive search of the candidate space. I’m also using "theoretically useful" in a slightly misleading way, because I really couldn’t work out how else to phrase this question. A "theoretically useful cost function", the way I’m defining it, is:

A function to which some theoretical optimisation algorithm can be applied such that the algorithm has a non-negligible chance of finding the global optimum in less time than exhaustive search

A few simplified, 1-dimensional examples of where this thought process came from:

graph of a bimodal function exhibiting both a global and a local maximum

Here’s a function which, while neither convex nor differentiable (as it’s discrete), is easily optimisable (in terms of finding the global maximum) with an algorithm such as Simulated Annealing.
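
To make "easily optimisable" concrete, here is a minimal simulated-annealing sketch in Python on a discrete 1-D domain. The bimodal test function and the cooling schedule are my own stand-ins, not the function in the plot.

    import math, random

    def f(x):
        # bimodal stand-in: global maximum at x = 70, local maximum at x = 20
        return 10 * math.exp(-(x - 70) ** 2 / 50) + 6 * math.exp(-(x - 20) ** 2 / 30)

    def anneal(lo, hi, f, steps=5000, t0=5.0):
        x = random.randint(lo, hi)
        for step in range(steps):
            t = t0 * (1 - step / steps) + 1e-9                # linear cooling
            y = min(max(x + random.choice([-1, 1]), lo), hi)  # random neighbour
            # always accept uphill moves; accept downhill with Boltzmann probability
            if f(y) >= f(x) or random.random() < math.exp((f(y) - f(x)) / t):
                x = y
        return x

    best = anneal(0, 100, f)
    print(best, f(best))    # typically lands near the global maximum at x = 70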

graph of a boolean function with 100 0 values and a single 1 value

Here is a function which clearly cannot be a useful cost function, as its usefulness would imply that the arbitrary search problem can be classically solved faster than exhaustive search.

graph of a function which takes random discrete values

Here is a function which I do not believe can be a useful cost function, as moving between points gives no meaningful information about the direction in which one must move to find the global maximum.

The crux of my thinking so far is along the lines of "applying the cost function to points in the neighbourhood of a point must yield some information about the location of the global optimum". I attempted to formalise this (in a perhaps convoluted manner) as:

Consider the set $D$ representing the search space of the problem (and thus the domain of the function) and the undirected graph $G$, where each element of $D$ is assigned a node in $G$, and each node in $G$ has edges connecting it to its neighbours in $D$. We then remove elements from $D$ until the objective function has no non-global local optima over this domain and no plateaus exist (i.e. the value of the cost function at each point in the domain is different from its value at each of that point's neighbours). Every time we remove an element $e$ from $D$, we remove the corresponding node from the graph $G$ and add edges which directly connect each neighbour of $e$ to each other, so that they become each other's new neighbours. The number of elements which remain in the domain after this process is applied is designated $N$. If $N$ is a non-negligible proportion of $\#(D)$ (i.e. significantly greater than the ratio of $\#(\{\text{possible global optima}\})$ to $\#(D)$), then the function is a useful objective function.

Whilst this works well both for the function which definitely is useful and for the definitely-not-useful boolean function, this process applied to the random function seems to give the wrong answer, as the number of elements that would lead to a function with no local optima is in fact a non-negligible proportion of the total domain.

Is my definition on the right track? Is this a well-known question whose answer I just can’t figure out how to find? Does there exist some optimisation algorithm that could theoretically find the optimum of a completely random function faster than exhaustive search, or is my assertion that none can correct?

In conclusion, what is different about the first function that makes it a good candidate for optimisation, compared to the other functions which are not?

Trivial clarification regarding the analysis of Dijkstra’s algorithm as dealt with in Kenneth Rosen’s “Discrete Mathematics and Its Applications”

I was going through the text “Discrete Mathematics and Its Applications” by Kenneth Rosen, where I came across the analysis of Dijkstra’s algorithm and felt that the values at some places in the analysis are not quite right. The main motive of my question is not the analysis of Dijkstra’s algorithm in general (a better and clearer version exists in the CLRS text); my main motive is to analyse the algorithm accurately as far as the mathematics is concerned, treating the algorithm below as just an unknown algorithm whose analysis needs to be done. I just want to check my progress: is the thing I point out as being weird actually weird or not?

Let’s move on to the question. Below is the algorithm in the text.

ALGORITHM: Dijkstra’s Algorithm.

    procedure Dijkstra(G: weighted connected simple graph, with all weights positive)
    {G has vertices a = v[1], ..., v[n] = z and weights w(v[i], v[j]),
     where w(v[i], v[j]) = ∞ if {v[i], v[j]} is not an edge in G}
    for i := 1 to n
        L(v[i]) := ∞
    L(a) := 0
    S := ∅
    {the labels are now initialized so that the label of a is 0 and all
     other labels are ∞, and S is the empty set}
    while z ∉ S
        u := a vertex not in S with L(u) minimal
        S := S ∪ {u}
        for all vertices v not in S
            if L(u) + w(u, v) < L(v) then
                L(v) := L(u) + w(u, v)
        {this adds a vertex to S with minimal label and updates the labels of vertices not in S}
    return L(z)  {L(z) = length of a shortest path from a to z}
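
For concreteness, here is a direct Python transcription of the pseudocode; the dictionary-of-dictionaries graph representation and the small example graph are my own additions.

    import math

    def dijkstra(graph, a, z):
        L = {v: math.inf for v in graph}    # all labels start at infinity
        L[a] = 0
        S = set()
        while z not in S:
            u = min((v for v in graph if v not in S), key=lambda v: L[v])
            S.add(u)
            for v, w in graph[u].items():   # relax edges out of u
                if v not in S and L[u] + w < L[v]:
                    L[v] = L[u] + w
        return L[z]

    g = {"a": {"b": 4, "c": 2}, "b": {"a": 4, "c": 1, "z": 5},
         "c": {"a": 2, "b": 1, "z": 8}, "z": {"b": 5, "c": 8}}
    print(dijkstra(g, "a", "z"))            # 8, via a -> c -> b -> z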

The following is the analysis which they used:

We can now estimate the computational complexity of Dijkstra’s algorithm (in terms of additions and comparisons). The algorithm uses no more than $n-1$ iterations, where $n$ is the number of vertices in the graph, because one vertex is added to the distinguished set at each iteration. We are done if we can estimate the number of operations used for each iteration. We can identify the vertex not in S in the $k$th iteration with the smallest label using no more than $n-1$ comparisons. Then we use an addition and a comparison to update the label of each vertex not in S in the $k$th iteration. It follows that no more than $2(n-1)$ operations are used at each iteration, because there are no more than $n-1$ labels to update at each iteration.

“The algorithm uses no more than $n-1$ iterations where $n$ is the number of vertices in the graph, because one vertex is added to the distinguished set at each iteration.” What I feel is that it should be $n$ iterations and not $n-1$, since in the very first iteration the vertex $a$ is included in the set $S$, and the process continues till $z$ is inserted into the set $S$; $z$ may be the last vertex in the ordering, i.e. $v_n$.

I hope the remaining statements are fine.

Can you define a ‘discrete’ language?

Are the following appropriate definitions for formal languages over the alphabet {0,1}?

Example 1: An argument w is a member of L under the following rules (written out as code after Example 2 below):

  1. If more than half its digits are 1’s -> it has to be a member of decidable language A

  2. If more than half its digits are 0’s -> it has to be a member of decidable language B

  3. If exactly half of its digits are 1’s and half are 0’s, then it is not a member of the language.

Example 2: w is a member of L if:

  1. If w is longer than 10 bits, it has to not be a member of decidable language A (which has a decidable complement) to be a member of L.

  2. If w is 10 bits or less, it has to be a member of decidable language B to be a member of L.
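
For concreteness, the case split in Example 1 can be written as an ordinary membership test; the placeholder deciders for A and B below are assumptions, standing in for whatever decidable languages are chosen.

    # Example 1 as a membership test; in_A and in_B are placeholder deciders.
    def in_A(w): return w.endswith("1")      # stand-in for decidable language A
    def in_B(w): return w.startswith("0")    # stand-in for decidable language B

    def in_L(w):
        ones, zeros = w.count("1"), w.count("0")
        if ones > zeros:
            return in_A(w)    # rule 1
        if zeros > ones:
            return in_B(w)    # rule 2
        return False          # rule 3: exactly half 1's and half 0's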

The general question: is the above ‘discrete’ form of language definition acceptable?

In the same way that a function can be discrete or continuous, I am nicknaming this a ‘discrete’ definition for a language, because based on what type of input you are, your rule (reason) for membership/non-membership can be different from other arguments’. I would assume this is OK? There does exist an argument that not all discrete functions are computable, but I don’t think this argument holds if all the inputs are of finite precision (as is the case with finite binary strings).

Average Case Analysis of Insertion Sort as dealt with in Kenneth Rosen’s “Discrete Mathematics and Its Applications”

I was going through “Discrete Mathematics and Its Applications” by Kenneth Rosen, where I came across the following insertion sort algorithm along with its analysis. The algorithm is quite different from the one dealt with in CLRS, so I have shared the entire algorithm below. Note that they consider a machine where only comparisons are significant, and they proceed accordingly. The problem I face is in the analysis portion, given here in bold. The specific doubts I have are pointed out at the very end of this question.

ALGORITHM The Insertion Sort.


procedure insertion sort($a_1,a_2,\ldots,a_n$: real numbers with $n \geqslant 2$)

    for j := 2 to n
    begin
        i := 1
        while a_j > a_i
            i := i + 1
        m := a_j
        for k := 0 to j-i-1
            a_{j-k} := a_{j-k-1}
        a_i := m
    end
    {a_1, a_2, ..., a_n is sorted}
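
Here is a runnable Python transcription of the pseudocode above (a sketch; the 1-indexing of the text is shifted to Python's 0-indexing):

    def insertion_sort(a):
        for j in range(1, len(a)):
            i = 0
            while a[j] > a[i]:          # linear search for the insertion point
                i += 1
            m = a[j]
            for k in range(j, i, -1):   # shift a[i..j-1] one slot to the right
                a[k] = a[k - 1]
            a[i] = m
        return a

    print(insertion_sort([3, 1, 4, 1, 5, 9, 2, 6]))   # [1, 1, 2, 3, 4, 5, 6, 9]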

THE INSERTION SORT: The insertion sort is a simple sorting algorithm, but it is usually not the most efficient. To sort a list with $ n$ elements, the insertion sort begins with the second element. The insertion sort compares this second element with the first element and inserts it before the first element if it does not exceed the first element and after the first element if it exceeds the first element. At this point, the first two elements are in the correct order. The third element is then compared with the first element, and if it is larger than the first element, it is compared with the second element; it is inserted into the correct position among the first three elements.

In general, in the $j$th step of the insertion sort, the $j$th element of the list is inserted into the correct position in the list of the previously sorted $j-1$ elements. To insert the $j$th element into the list, a linear search technique is used; the $j$th element is successively compared with the already sorted $j-1$ elements at the start of the list until the first element that is not less than this element is found, or until it has been compared with all $j-1$ elements; the $j$th element is then inserted into the correct position so that the first $j$ elements are sorted. The algorithm continues until the last element is placed in the correct position relative to the already sorted list of the first $n-1$ elements. The insertion sort is described in pseudocode in the algorithm above.

Average-Case Complexity of the Insertion Sort: What is the average number of comparisons used by the insertion sort to sort $ n$ distinct elements?

Solution: We first suppose that $ X$ is the random variable equal to the number of comparisons used by the insertion sort to sort a list $ a_1 ,a_2 ,…,a_n$ of $ n$ distinct elements. Then $ E(X)$ is the average number of comparisons used. (Recall that at step $ i$ for $ i = 2,…,n$ , the insertion sort inserts the $ i$ th element in the original list into the correct position in the sorted list of the first $ i − 1$ elements of the original list.)

We let $ X_i$ be the random variable equal to the number of comparisons used to insert $ a_i$ into the proper position after the first $ i − 1$ elements $ a_1 ,a_2,…,a_{i−1}$ have been sorted. Because

$ X=X_2+X_3+···+X_n$ ,

we can use the linearity of expectations to conclude that

$ E(X) = E(X_2 + X_3 +···+X_n) = E(X_2) + E(X_3) +···+E(X_n).$

To find $E(X_i)$ for $i = 2, 3,\ldots,n$, let $p_j(k)$ denote the probability that the largest of the first $j$ elements in the list occurs at the $k$th position, that is, that $\max(a_1,a_2,\ldots,a_j) = a_k$, where $1 \le k \le j$. Because the elements of the list are randomly distributed, it is equally likely for the largest element among the first $j$ elements to occur at any position. Consequently, $p_j(k) = \frac{1}{j}$. If $X_i(k)$ equals the number of comparisons used by the insertion sort if $a_i$ is inserted into the $k$th position in the list once $a_1,a_2,\ldots,a_{i-1}$ have been sorted, it follows that $X_i(k) = k$. Because it is possible that $a_i$ is inserted into any of the first $i$ positions, we find that

$$E(X_i) = \sum_{k=1}^{i} p_i(k)\,X_i(k) = \sum_{k=1}^{i} \frac{1}{i}\cdot k = \frac{1}{i}\sum_{k=1}^{i} k = \frac{1}{i}\cdot\frac{i(i+1)}{2} = \frac{i+1}{2}$$

It follows that

$$E(X) = \sum_{i=2}^{n} E(X_i) = \sum_{i=2}^{n} \frac{i+1}{2} = \frac{n^{2} + 3n - 4}{4}$$
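
As a sanity check on this result, here is a brute-force Python computation for small $n$, counting comparisons exactly as the pseudocode does: inserting the next element costs $k$ comparisons, where $k$ is its rank among the elements seen so far, itself included.

    from itertools import permutations

    def comparisons(perm):
        total = 0
        for j in range(1, len(perm)):
            x = perm[j]
            total += sum(1 for y in perm[:j] if y < x) + 1   # cost = rank k
        return total

    for n in range(2, 7):
        perms = list(permutations(range(n)))
        avg = sum(comparisons(p) for p in perms) / len(perms)
        print(n, avg, (n * n + 3 * n - 4) / 4)   # the two values agree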

My doubt


Now, in the calculation of $E(X_i)$, we first consider the probability that the maximum element among $a_1,a_2,\ldots,a_i$ is at position $k$. They then say that the number of comparisons when $a_i$ is placed into the $k$th position in the list $a_1,a_2,\ldots,a_{i-1}$ (which is already sorted) is $k$. Why are they considering the insertion of $a_i$ into the position of the maximum of the elements $a_1,a_2,\ldots,a_i$? As per the algorithm, $a_i$ should be placed at the first position (scanning the array from the left) where we find an element $\geqslant a_i$, not at the position of the maximum element of the sublist $a_1,a_2,\ldots,a_i$.

Moreover, they say that the maximum element of the sublist $a_1,a_2,\ldots,a_i$ is at an arbitrary position $k$, with probability $\frac{1}{i}$. But if $a_1,a_2,\ldots,a_{i-1}$ is sorted, then the maximum of $a_1,a_2,\ldots,a_i$ is either $a_{i-1}$ or $a_i$.

Capacity of a discrete memoryless channel

For an integer $ I$ , the input-output relationship of a discrete memoryless channel is given by:

$Y = X + Z \pmod{I}$ (i.e., the sum indicates modular addition)

where $ I ≥ 2$ , and

• X is an integer chosen from the alphabet Ax = {1,…,2I},

• Z is noise which is a uniform Bernoulli random variable. This means that Az = {0,1}, and

Pr{Z = 0} = Pr{Z = 1} = 0.5.

How can we calculate the capacity of this channel?
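
One general-purpose numerical route (my suggestion, not from the question) is the Blahut-Arimoto algorithm, which computes the capacity of any discrete memoryless channel from its transition matrix. The sketch below builds that matrix for the channel as stated (mod $I$, inputs $1,\ldots,2I$), with an illustrative $I = 3$.

    import numpy as np

    I_ = 3
    xs = np.arange(1, 2 * I_ + 1)            # input alphabet {1, ..., 2I}
    P = np.zeros((len(xs), I_))              # P[i, y] = Pr{Y = y | X = xs[i]}
    for i, x in enumerate(xs):
        P[i, x % I_] += 0.5                  # Z = 0, probability 1/2
        P[i, (x + 1) % I_] += 0.5            # Z = 1, probability 1/2

    def row_divergence(P, q):                # D(x) = sum_y P(y|x) log2(P(y|x)/q(y))
        with np.errstate(divide="ignore", invalid="ignore"):
            t = P * np.log2(P / q)
        return np.where(P > 0, t, 0.0).sum(axis=1)

    p = np.full(len(xs), 1 / len(xs))        # start from the uniform input
    for _ in range(2000):                    # Blahut-Arimoto iterations
        D = row_divergence(P, p @ P)
        p = p * np.exp2(D)
        p /= p.sum()

    print(p @ row_divergence(P, p @ P))      # capacity in bits per channel use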

How does one simulate continuous gravity using a discrete timestep?

While gravity in real life is continuous, computers are limited to discrete calculations.

Therefore, a seemingly correct projectile simulation inevitably drifts off.

For example:

    // Repeat once per frame
    position += velocity * deltaTime;
    velocity += gravity * deltaTime;

Graphed against the actual projectile formula (two datasets merged): while they look equal at first, the discrete data drifts off over time.
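
For reference, here is a small Python sketch comparing the per-frame update above with the exact closed-form trajectory; the semi-implicit variant (update velocity before position) shown alongside is one common way to change the drift's character, and is my addition rather than something from the post.

    # Compare explicit Euler (the post's order) with the closed form.
    g, dt, steps = -9.81, 1 / 60, 600
    pos_e = vel_e = 0.0       # explicit Euler: position first, then velocity
    pos_s = vel_s = 0.0       # semi-implicit Euler: velocity first

    for n in range(1, steps + 1):
        pos_e += vel_e * dt
        vel_e += g * dt
        vel_s += g * dt
        pos_s += vel_s * dt

    t = steps * dt
    exact = 0.5 * g * t ** 2
    print(pos_e - exact, pos_s - exact)   # the two variants drift to opposite sides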

Is DISCRETE LOG an NP-hard problem?

In cryptography there are two problems which form part of the foundation of modern public-key cryptography. Both of them can be solved in polynomial time on quantum computers. I am talking about:

  • FACT
    Given: A composite number, i.e. a positive integer which is the product of some prime numbers: $x = p_1 \cdot p_2 \cdot \ldots \cdot p_n$. You know only $x$.
    Wanted: At least one factor of this composite number.
    Note: In cryptography the composite number is the product of exactly two primes, both of them dozens of digits long.
  • DISCRETE LOG
    Given: $x = a^n \bmod p$. $p$ is prime, and you know $x$, $a$, and $p$.
    Wanted: Find $n$ (see the sketch just below).
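
To pin down the DISCRETE LOG definition, here is a brute-force Python sketch (exponential time, purely illustrative; the numbers are made up):

    def discrete_log(a, x, p):
        """Find n with a**n % p == x by trying every exponent."""
        value = 1
        for n in range(p):                 # the order of a divides p - 1
            if value == x:
                return n
            value = (value * a) % p
        return None

    p, a, n = 101, 2, 57
    x = pow(a, n, p)
    print(discrete_log(a, x, p))           # recovers 57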

I know that both problems, as far as we know, are not in the complexity class $P$, i.e. for both problems there is no algorithm known that could solve them in polynomial time on a deterministic Turing machine.

I know that both problems can be solved in polynomial time on a non-deterministic Turing machine, which, by definition, means that both of them are in the class $NP$.

Let’s suppose that $P \ne NP$. Under this assumption, $NP$ is partitioned into three sub-classes:

  • $ P$
    All problems which are solvable in polynomial time on a deterministic Turing Machine
  • $ NPC$
    NP-complete.
    The subset of $ NP$ to which all problems in $ NP$ can be reduced, i.e. the subset of $ NP$ that is NP-hard.
  • $ NPI$
    NP-intermediate
    All problems which are in $ NP$ but neither in $ P$ nor in $ NPC$ .

It is known that $NPI$ is not empty if $P \ne NP$ (Ladner’s theorem).

(If $P=NP$, then also $NPC=P$, which means that $NPI$ must be empty.)

I know that, under the assumption that $P \ne NP$, FACT seems to be in $NPI$, since so far nobody has been able to prove that FACT $\in NPC$.

But I could not find similar statements about DISCRETE LOG.

Here are my questions:

  • Is DISCRETE LOG known to be in $NPC$? Or is it thought to be in $NPI$?
  • If it is in $ NPI$ :
    • Is there a known algorithm to reduce FACT to DISCRETE LOG?
    • Or is there an algorithm to reduce DISCRETE LOG to FACT?
    • Are they maybe even equivalent, i.e. reducible in both directions?

Approximating a discrete distribution

I have a reference discrete distribution. For the example, let’s say:

  • P(X=1)=0.2
  • P(X=2)=0.7
  • P(X=3)=0.1

Now I am given n numbers, and I want to group (sum) those numbers into 3 bins so as to approximate the above distribution as closely as possible, in the sense of minimizing the sum of squared errors. So let’s say I have these numbers: 10, 25, 25, 50 (total sum = 100). I want to group them into 3 bins, and ideally the sum of each bin would be 20, 70, 10; that would perfectly match the distribution. Unfortunately that’s not possible, and the best here would be 25, 75 (50+25), 10. The error here is (25-20)²+(75-70)²+(10-10)²=50.
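
For small n, a brute-force sketch in Python states the problem precisely: try every assignment of the numbers to the three bins and keep the one minimizing the squared error against the reference distribution (exponential in n, so this is a specification rather than a practical algorithm).

    from itertools import product

    numbers = [10, 25, 25, 50]
    target = [0.2, 0.7, 0.1]
    goal = [p * sum(numbers) for p in target]    # ideal bin sums: 20, 70, 10

    best = None
    for assign in product(range(3), repeat=len(numbers)):
        sums = [0, 0, 0]
        for x, b in zip(numbers, assign):
            sums[b] += x
        err = sum((s - g) ** 2 for s, g in zip(sums, goal))
        if best is None or err < best[0]:
            best = (err, sums)

    print(best)    # (50, [25, 75, 10]) for this example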

What is the algorithm solving the general problem?