What does it mean for an algorithm to converge?

I keep coming across this term when reading about reinforcement learning, for example in this sentence:

If the problem is modelled with care, some Reinforcement Learning algorithms can converge to the global optimum

http://reinforcementlearning.ai-depot.com/

or here:

For any fixed policy $\pi$, the TD algorithm described above has been proved to converge to $V^\pi$

http://webdocs.cs.ualberta.ca/~sutton/book/ebook/node62.html

My understanding of the word converge is that it means several things coming together to the same point, but how can a single thing (the algorithm) do that?
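
The closest reading I can come up with is that it is the algorithm’s *sequence of iterates* (its successive estimates) that converges, the way $x_n = 1/n$ converges to $0$. A minimal Go sketch of that idea, using an invented one-state task (reward $1$ with probability $0.75$, episode ends immediately), not the setup from either source:

    package main

    import (
        "fmt"
        "math/rand"
    )

    // Toy illustration of "the algorithm converges": the estimates
    // v_1, v_2, ... produced by repeated TD(0)-style updates settle
    // toward a fixed number (here the true expected reward, 0.75).
    func main() {
        v := 0.0
        for n := 1; n <= 200000; n++ {
            r := 0.0
            if rand.Float64() < 0.75 {
                r = 1.0
            }
            // Update toward the sampled target; the next state is
            // terminal, so the target is just r. The step size 1/n
            // decays, which is what the convergence proofs require.
            v += (r - v) / float64(n)
            if n%50000 == 0 {
                fmt.Printf("after %6d updates: v = %.4f\n", n, v)
            }
        }
    }

Here $v$ is a single number, but the sequence of values it takes on is what “converges”.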

If something happens in 25 % of all cases in every generation, what will the frequency converge to in the long run?

I just saw this map http://consang.net/images/c/c4/Globalcolourlarge.jpg (available on Archive.org in case it ever disappears) and became curious about how inbred people actually are.

3 scenarios:

  1. In quite a few countries around 25 % of all marriages are between cousins. If that number is constant over the generations, how inbred is the average person living in such a country?

  2. In the worst countries around 50 % of all marriages are between cousins. Same question as in (1).

  3. And the same question for the countries with only 1 % cousin marriages.

Where do these three scenarios converge? Naïvely I figured it should be at 50 %, 100 % and 2 % but I can’t really justify that guess.
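
The only mathematical handle I have on it: if the per-generation quantity obeys a recurrence $F_{n+1}=aF_n+b$ with $|a|<1$, it converges to the fixed point $b/(1-a)$ from any starting value. A toy Go sketch of that iteration, with invented coefficients (the $p/16$ borrows the standard first-cousin inbreeding coefficient $1/16$; the carry-over factor $p/4$ is purely hypothetical, so this is not a real genetics model):

    package main

    import "fmt"

    // Toy fixed-point iteration, NOT a real genetics model: it only
    // shows that a constant per-generation rate drives the quantity
    // to a limit b/(1-a). The recurrence coefficients are invented.
    func main() {
        for _, p := range []float64{0.01, 0.25, 0.50} { // the three scenarios
            a, b := p/4, p/16 // hypothetical carry-over and fresh contribution
            f := 0.0
            for gen := 0; gen < 200; gen++ {
                f = a*f + b
            }
            fmt.Printf("p=%4.2f: after 200 generations %.6f, fixed point %.6f\n",
                p, f, b/(1-a))
        }
    }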

Does $g_n$ converge weakly to $0$?

This is where I am stuck while solving another problem.

Let $T:L^1 \rightarrow X$ be an operator such that $T|_{L^2(\mu)}$ is compact.

Suppose $f_n$ is a sequence in $L^1$ such that $f_n \rightarrow 0$ weakly. Set $g_n=f_n\mathbb{1}_{A_n}$ where $A_n=\{x : |f_n(x)|<M\}$ for some fixed $M$.

How can we then say that $\|Tg_n\| \rightarrow 0$ in norm?

It would be very nice if we could say $g_n \rightarrow 0$ weakly.

Thanks for the help!

If $d_n = \frac{\beta_n}{10^n}$ where $\beta_n$ takes integer values between 0 and 9, does $\sum_{n=1}^{\infty}d_n$ converge?

The series looks like a convergent geometric series: $$\sum_{n=1}^{\infty}\frac{1}{10^n} = \sum_{n=0}^{\infty}\frac{1}{10^n} - 1 = \frac{1}{9},$$ where the terms are being arbitrarily multiplied by constants between $0$ and $9$. I’m not sure how this affects the convergence of the series. I suspect that it converges because the constants eventually become small in comparison to the value of the geometric terms, but I’m not sure how I could prove it.

The “worst case” would be one where $\beta_n = 9$ for all $n$. In that case:

$$\sum_{n=1}^{\infty}d_n = 9\sum_{n=1}^{\infty}\frac{1}{10^n} = 1.$$

So again I’m inclined to think it converges in any case.
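
If I try to make the comparison precise: since $0\le\beta_n\le 9$ for every $n$, $$0 \le d_n = \frac{\beta_n}{10^n} \le \frac{9}{10^n},$$ and $\sum_{n=1}^{\infty}\frac{9}{10^n}=1$, so the partial sums of $\sum d_n$ are nondecreasing and bounded above by $1$, which would force the series to converge. This is presumably exactly why a decimal expansion $0.\beta_1\beta_2\beta_3\ldots$ always defines a real number in $[0,1]$.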

Thanks.

Does the integral converge?

So, I am looking at the integral $$\int_{0}^1\tfrac{|\cos(x^{-1/2})|}{2x^{3/2}}\,dx.$$ When I put this into Mathematica, it shows me a result, but warns me that it might be incorrect if the integral does not converge… However, substituting $y = x^{-1/2}$ I get $$\int_{\varepsilon}^1\tfrac{|\cos(x^{-1/2})|}{2x^{3/2}}\,dx = \int_1^{1/\sqrt{\varepsilon}}|\cos(y)|\,dy,$$ which tends to infinity as $\varepsilon\to 0$. So I think the integral does not converge. Can anybody confirm this, please?
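
To make the divergence explicit: $|\cos|$ has period $\pi$ and integrates to $2$ over each period, so $$\int_1^{1/\sqrt{\varepsilon}}|\cos(y)|\,dy \ \ge\ 2\left\lfloor\frac{1/\sqrt{\varepsilon}-1}{\pi}\right\rfloor \ \to\ \infty \quad\text{as }\varepsilon\to 0.$$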

The question is related to this one, where LinearOperator32 claims in the comments the integral converges.

Does the intercept converge if we fit a best fit line to points with prime coordinates?

A few months ago I asked this question on Mathematics Stack Exchange but it has received little attention. Perhaps the question is more applicable here.

Let $p_k$ denote the $k$th prime, with $p_1=2$, and consider the following array of coordinates: $$\begin{array}{c|ccccccc}x_i&2&5&11&17&23&31&\cdots\\\hline y_i&3&7&13&19&29&37&\cdots\end{array}$$ where $i=1,2,\cdots$. Then $x_i=p_{2i-1}$ and $y_i=p_{2i}$.

If $y_i=\alpha+\beta x_i$ is the best fit line for these prime coordinates, does $\alpha$ converge as $i\to\infty$, and if so, to what value?

Note that $\beta=1+\epsilon\to1^+$ as $i\to\infty$ for some $\epsilon>0$, since $y_i>x_i$. The following table gives the value of $\alpha$ for $i=10^j$: $$\begin{array}{c|cccccccc}j&1&2&3&4&5&6&7&8\\\hline\alpha&0.33&2.41&4.08&6.57&8.91&11.26&13.57&15.84\end{array}$$ It may however be too early to tell whether $\alpha$ converges, since only $j\le8$ is shown.
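
For anyone who wants to push the table further, a minimal Go sketch of the computation (the sieve bound and the checkpoints are arbitrary choices of mine): it pairs consecutive primes and computes the least-squares slope and intercept from running sums.

    package main

    import "fmt"

    // Pair consecutive primes (p1,p2), (p3,p4), ... and fit
    // y = alpha + beta*x by ordinary least squares, printing alpha at
    // a few checkpoints. The sieve bound is an arbitrary assumption.
    func main() {
        const limit = 2_000_000
        composite := make([]bool, limit)
        var primes []int
        for n := 2; n < limit; n++ {
            if !composite[n] {
                primes = append(primes, n)
                for m := n + n; m < limit; m += n {
                    composite[m] = true
                }
            }
        }
        var sx, sy, sxx, sxy float64 // running sums for OLS
        for i := 0; 2*i+1 < len(primes); i++ {
            x := float64(primes[2*i])   // x_i = p_{2i-1} (0-indexed slice)
            y := float64(primes[2*i+1]) // y_i = p_{2i}
            sx, sy, sxx, sxy = sx+x, sy+y, sxx+x*x, sxy+x*y
            n := float64(i + 1)
            switch i + 1 {
            case 10, 100, 1000, 10000:
                beta := (n*sxy - sx*sy) / (n*sxx - sx*sx)
                alpha := (sy - beta*sx) / n
                fmt.Printf("i=%6d  alpha=%8.4f  beta=%.8f\n", i+1, alpha, beta)
            }
        }
    }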

Does my solution converge to O(N) for worst-case time complexity?

Forgive me if this should be in StackOverflow or Mathematics instead!

I was given the following question at an interview:

Given an array of unique integers, find the first non-negative integer that is missing from a consecutive sequence of the non-negative integers. For example, the input [3, 4, -1, 0, 1] should give 2. The input [1, -3, 2, 0] should give -1 (i.e. no missing numbers). The input [-2, -3, -5] should give 0.

I came up with a solution that is something akin to this: https://play.golang.org/p/b2pXr9kZYxM:

    func findMissingNumber(nums []int) int {
        // move all negative values to the front, so that nums[j:]
        // holds only the non-negative values
        j := 0
        for i, num := range nums {
            if num < 0 {
                nums[i], nums[j] = nums[j], nums[i]
                j++
            }
        }
        if j >= len(nums) {
            return 0 // no non-negative values at all
        }
        count := 0 // inner-loop iterations, kept for complexity checks
        for i := j; i < len(nums); {
            count++
            if nums[i] == i-j {
                i++ // value already sits in its target slot
                continue
            }
            if nums[i]+j >= len(nums) || nums[nums[i]+j]+j >= len(nums) {
                // the value (or its target slot's occupant) is out of
                // range: mark this slot and move on
                nums[i] = -1
                i++
                continue
            }
            // swap nums[i] into its target slot nums[i]+j
            temp := nums[nums[i]+j]
            nums[nums[i]+j] = nums[i]
            nums[i] = temp
        }
        fmt.Println(count)
        // the first marked slot gives the smallest missing non-negative integer
        for i := j; i < len(nums); i++ {
            if nums[i] == -1 {
                return i - j
            }
        }
        return -1 // nothing missing
    }

Both the interviewer and I agreed that my solution could’ve been slightly more optimized (GeeksforGeeks seems to agree), but we were unsure whether the worst case for my solution was $O(N)$ or $O(N^2)$.

It seems like my solution has $$O(N) + O(N/2) + O(N/4) + O(N/6) +\cdots+ O(1),$$ which becomes essentially: $$\frac{N}{2}\sum_{n=1}^{N} n^{-1}.$$

My calculus is a bit rusty (as are my algorithm skills), but I know this series diverges as $N \to \infty$. Still, can I say that for $N$ sufficiently smaller than infinity (i.e. Integer.Max), my worst case becomes simply $$O(kN) = O(N)?$$
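
One empirical way to probe it, instead of arguing about the series: run the function on shuffled inputs of growing size and watch how the `count` it prints grows relative to $N$; a roughly constant count/N ratio supports $O(N)$, while growth like $\log N$ would point at $O(N\log N)$. A rough sketch, assuming findMissingNumber from above is in the same package:

    package main

    import (
        "fmt"
        "math/rand"
    )

    // Empirical check: findMissingNumber prints its inner-loop count,
    // so compare that count against N across input sizes.
    func main() {
        for _, n := range []int{1_000, 10_000, 100_000, 1_000_000} {
            nums := rand.Perm(n)   // unique integers 0..n-1, shuffled
            nums[0] = -nums[0] - 1 // push one value below zero so something is missing
            missing := findMissingNumber(nums) // prints count on its own line
            fmt.Printf("N=%8d  missing=%d\n", n, missing)
        }
    }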

An error occurred, “The application has encountered an error 403” in Elavon Converge payment API call

I am working on an HTML MVC website. Everything works fine on my local computer, but I am getting an error as soon as it’s deployed on a shared hosting server (GoDaddy), while calling the Converge API POST to receive a token:

Error:The application has encountered an error 403 

If $f$ is measurable and positive, is there a sequence of simple functions that converges decreasingly to $f$?

A famous theorem says that if $f\geq 0$ is measurable, then there is an increasing sequence of simple functions $(\varphi_n)$ s.t. $\varphi_n\nearrow f$. Then we can define $$\int_{\mathbb R}f:=\lim_{n\to \infty }\int_{\mathbb R}\varphi_n.$$

Now my question is: under the same conditions (i.e. $f\geq 0$ and measurable), is there a decreasing sequence of simple functions $(\psi_n)$ s.t. $\psi_n\searrow f$?

Attempts

I would say no, because otherwise we could define $$\int f=\lim_{n\to \infty }\int_{\mathbb R}\psi_n,$$ and the monotone convergence theorem (MCT) would work for decreasing sequences as well. But I know that MCT doesn’t hold for decreasing sequences, so I guess this result is not correct. Does someone have a counterexample? And if not, does the result hold?
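
The best attempt I have at a counterexample, assuming the usual definition that a simple function takes only finitely many finite values: every simple function is bounded, so for an unbounded $f$, e.g. $f(x)=1/x$ on $(0,1)$ or $f(x)=x$ on $\mathbb R$, there is no simple $\psi_1\geq f$ at all, and a decreasing sequence cannot even start. For bounded $f$ the answer seems to be yes: the upper dyadic staircases $$\psi_n=\sum_{k\geq 0}\frac{k+1}{2^n}\,\mathbb{1}_{\{k/2^n\leq f<(k+1)/2^n\}}$$ take finitely many values and satisfy $\psi_n\searrow f$. So the obstruction looks like boundedness, and the failure of MCT for decreasing sequences would then be about $\int\psi_1$ possibly being infinite, not about the pointwise approximation.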

How to prove that a system of delay differential equations never converges (the delay is not constant)

Two functions $X(t)$ and $Y(t)$ are coupled via $$\dot X(t) = a Y(t)-b, \qquad Y(t+X(t))=X(t),$$

where $a<0$ and $b$ is a constant.

I am mostly confused by the second equation. What is the mathematical term for this kind of delay differential equation? What kind of initial condition do I need to specify?

It is obvious that, if $X(0)=Y(0)=b/a$, then the system will stay stable for all $t>0$.

Through some simulation and some back-of-the-envelope thinking, it seems that when $X(0)=Y(0)=0$, the system may never converge to an equilibrium/stable solution. However, I have no idea what type of proof technique is required to show that.
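
For the simulation side, here is a rough Go sketch of the scheme I used (the step size, horizon, values of $a$ and $b$, the choice to default unknown values of $Y$ to $0$, and the nearest-grid-point rounding of $t+X(t)$ are all assumptions; note that when $X(t)<0$, the constraint $Y(t+X(t))=X(t)$ even assigns $Y$ in the past, which is part of what makes the equation odd):

    package main

    import "fmt"

    // Rough Euler sketch of  X'(t) = a*Y(t) - b,  Y(t + X(t)) = X(t),
    // with X(0) = Y(0) = 0. Everything about the discretisation is an
    // assumption for illustration only.
    func main() {
        const (
            a  = -1.0 // the question requires a < 0
            b  = 1.0  // illustrative value
            dt = 1e-3
            T  = 20.0
        )
        n := int(T / dt)
        Y := make([]float64, 2*n) // Y on the grid t = k*dt, defaults to 0
        x := 0.0
        for k := 0; k < n; k++ {
            t := float64(k) * dt
            // impose Y(t + X(t)) = X(t) at the nearest grid point
            if idx := int((t + x) / dt); idx >= 0 && idx < len(Y) {
                Y[idx] = x
            }
            // Euler step for X'(t) = a*Y(t) - b
            x += dt * (a*Y[k] - b)
            if k%5000 == 0 {
                fmt.Printf("t=%5.2f  X=%9.4f  Y=%9.4f\n", t, x, Y[k])
            }
        }
    }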

Thanks!