Asymptotic notation and random variables

I have two random variables $X$ and $Y$, and I want to bound the value of one in terms of the other (for now, I don’t care about the actual distributions of their values).

Suppose that the two variables can have different distributions, with values chosen from $[1, n]$, but $X$ is always upper bounded by $Y \cdot \log n$. Can I write this as $X = O(Y \log n)$ (if I care about the behavior for large $n$)? I’m not sure what the convention is regarding random variables and asymptotic notation.
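To make the question concrete, the reading I have in mind (my own assumption, not a claimed convention) is a bound that holds with probability 1 for all large $n$:

$$X = O(Y \log n) \quad \text{meaning} \quad \exists\, c > 0,\ n_0 \ \text{such that} \ \forall n \ge n_0: \Pr[X \le c \, Y \log n] = 1.$$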

Little-oh notation proof help

I really need help solving the following question:

Given: $$f(n) = o(g(n))$$

Prove: $$3^{f(n)} = o(3^{g(n)})$$

My attempt:

I know that $\frac{f(n)}{g(n)} \to 0$.

I need to prove that $f(n) - g(n) \to -\infty$ so that $3^{f(n)-g(n)} \to 0$.

How do I prove that?

In general, I’m not sure what properties I can assume about $f$ and $g$. Are they positive? What else do I know about them? Are we assuming that all these functions approach $\infty$, or can they be $\frac{1}{n}$, etc.?

Thanks a lot
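For what it’s worth, the missing step goes through under the additional assumption that $g(n) \to \infty$ (an assumption on my part; the claim can fail without it):

$$f(n) - g(n) = g(n) \left( \frac{f(n)}{g(n)} - 1 \right) \to \infty \cdot (0 - 1) = -\infty.$$

Without some such assumption the statement is false: for $f(n) = 1/n^2$ and $g(n) = 1/n$ we have $f(n) = o(g(n))$, yet $3^{f(n)-g(n)} \to 3^0 = 1 \neq 0$.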

The role of asymptotic notation in $e^x = 1 + x + \Theta(x^2)$?

I’m reading CLRS and there is the following:

When $x \to 0$, the approximation of $e^x$ by $1 + x$ is quite good: $$e^x = 1 + x + \Theta(x^2)$$

I suppose I understand what this equation means from a math perspective, and there is also an answer in another CS question. But I don’t understand some things, so I have a few questions.

  1. Why do they use $\Theta$ here, and why do they use the $=$ sign?
  2. Is it possible to explain how the notation here is related to the author’s conclusion that $e^x$ is very close to $1 + x$ when $x \to 0$?
  3. Also, how is it important here that $x$ tends to $0$ rather than to $\infty$, as we usually use asymptotic notation?

I’m sorry if there are a lot of questions and if they are stupid; I’m just trying to master this topic.
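For reference, one way to unpack the equation (standard Taylor-series reasoning on my part, not something specific to CLRS):

$$e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!} = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \cdots$$

so the error $e^x - (1 + x) = \frac{x^2}{2} + \frac{x^3}{6} + \cdots = x^2 \left( \frac{1}{2} + \frac{x}{6} + \cdots \right)$ is squeezed between $c_1 x^2$ and $c_2 x^2$ for all sufficiently small $|x|$, which is exactly what $\Theta(x^2)$ asserts, with the limit taken as $x \to 0$ instead of $x \to \infty$. The $=$ sign is the usual one-directional convention: the left side is some function belonging to the class $1 + x + \Theta(x^2)$.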

Algorithm complexity or Big ‘O’ Notation Calculator in IDE

I am wondering if there is some Big ‘O’ Notation calculator. I tried to search online, but I could not find any. I believe that such a calculator could fit into productivity tools to improve the performance of an algorithm by pointing out the possible hot-spots. Wouldn’t it?

I would like to know if we have any such tools integrated with an IDE.

Reasons: The majority of people rely on their own assumptions (and are possibly biased) when calculating Big ‘O’ Notation for their algorithms. Anything that is not streamlined or automated can end up half-baked, since it passes through people like rumors.

P.S.: Please move this question to a different site if it is not appropriate here. I welcome prompt answers, not spam. Please do not kill it without answers.

Could we define the decimal notation of a natural number as an expression of multiplication and addition of single-digit numbers?

We recognize that every natural number can be expressed as 10 times a natural number plus a number from 0 to 9. Take any natural number and express it that way in the form $(10 \times x) + y$, where $y$ is between 0 and 9. Then express $x$ that way in turn, and keep going until $x$ is 0. This shows that every natural number can be obtained by starting from 0 and repeatedly applying operations, each of which multiplies by 10 and then adds a number from 0 to 9. It seems intuitive to define the decimal notation of a number by the method of getting it in that way: start from 0, multiply by 10 and add the first digit, then multiply by 10 and add the second digit, and so on. For example, we can define the notation 122 to literally mean $(10 \times ((10 \times ((10 \times 0) + 1)) + 2)) + 2$.
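As a minimal sketch (my own illustration; the function names are invented), the construction described above is just Horner’s rule in base 10, and peeling digits off with divmod inverts it:

    # Sketch: the fold described above (Horner's rule in base 10) and its
    # inverse. The names value_of and digits_of are my own.

    def value_of(digits):
        """Start at 0; repeatedly multiply by 10 and add the next digit."""
        value = 0
        for d in digits:
            value = 10 * value + d
        return value

    def digits_of(n):
        """Invert the construction: peel off the last digit until the
        quotient reaches 0."""
        digits = []
        while True:
            n, d = divmod(n, 10)
            digits.append(d)
            if n == 0:
                break
        return digits[::-1]

    assert value_of([1, 2, 2]) == (10 * ((10 * ((10 * 0) + 1)) + 2)) + 2 == 122
    assert digits_of(122) == [1, 2, 2]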

I know that’s probably not actually the way it was defined, but it turns out to be correct. Since it is correct anyway, maybe it is better to define it that way as an instruction to submit to computers. It’s not that hard to show that this definition agrees with the conventional one. Some people might demand to understand why proven math results hold, such as the statement that long division yields the correct quotient and remainder, but have so many other things to learn that they don’t want to bother understanding why this definition agrees with the conventional one. Despite that, could we still define it that way without creating a problem for those people? They might figure out a proof that long division works using this definition of decimal notation and then say, “That’s fine, I accept it only as a proof that long division works using that definition of the decimal notation of a natural number, and not as a proof that long division works using the conventional definition.”

Also, could we treat English like Python and say something means something because we defined it to mean that, and then use Polish notation to describe expressions with symbols we just invented a meaning for? Let’s say we already have a meaning for 0, which is ∅, and can also express the successor operation $S$ and the addition operation + in Polish notation, so $2 + S(2 + 2)$ would be written $+SS∅S+SS∅SS∅$. Next we define $0 = ∅, 1 = S∅, 2 = SS∅, 3 = SSS∅, 4 = SSSS∅, 5 = SSSSS∅, 6 = SSSSSS∅, 7 = SSSSSSS∅, 8 = SSSSSSSS∅, 9 = SSSSSSSSS∅, X = SSSSSSSSSS∅$. Now the number 122 can be described in Polish notation as $+×X+×X+×X∅122$. If you decide to replace the operations of right addition with left addition of the same number, then the Polish notation is $+2×X+2×X+1×X∅$. With a Python-like program you can then redefine $0 = +0×X, 1 = +1×X, 2 = +2×X, 3 = +3×X, 4 = +4×X, 5 = +5×X, 6 = +6×X, 7 = +7×X, 8 = +8×X, 9 = +9×X$, so that the digits are now defined as operations and you can type in $221∅$ to mean 122. Also, $×2∅8001∅$ could be the way to write 2 × 1008.

Although some people use the symbol ∅ in place of the symbol 0, in this case 0 and ∅ don’t mean the same thing at all. I originally defined 0 to mean the same thing as ∅, the number zero. Later, I redefined the meaning of all the digits to be operations, whereas ∅ still kept its original meaning, and the new notation for a natural number does not end until you reach the character ∅, which makes the multiplication expression unambiguous.

Then those picky mathematicians will be satisfied knowing that computer programs programmed this way can compute the quotient and remainder of a division problem of natural numbers, using the fact that the quotient and remainder of dividing one number by another can be determined from the quotient and remainder of dividing the floor of a tenth of the former by the latter. There might be a lot of picky mathematicians, because other mistakes in computer programs have happened, such as the Chess Titans glitch in the YouTube video “Windows Vista Chess Titans Castling Bug”. They will consider long division of natural numbers to get a quotient and remainder to be entirely a function from ordered pairs of natural numbers to ordered pairs of natural numbers, and will not accept that as a proof of how to compute the quotient as a rational number given in mixed-fraction notation.
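As a rough sketch (entirely my own, and only for the intermediate stage where the digits 0–9 and X still abbreviate $S$-chains, before they are redefined as operations), a recursive-descent evaluator for this Polish notation might look like:

    # Sketch: recursive-descent evaluator for the Polish notation above,
    # at the stage where 0-9 and X abbreviate S-chains applied to ∅.

    def evaluate(expr):
        def parse(i):
            """Parse one expression at index i; return (value, next index)."""
            c = expr[i]
            if c == '+':
                a, i = parse(i + 1)
                b, i = parse(i)
                return a + b, i
            if c == '×':
                a, i = parse(i + 1)
                b, i = parse(i)
                return a * b, i
            if c == 'S':
                a, i = parse(i + 1)
                return a + 1, i
            if c == '∅':
                return 0, i + 1
            if c == 'X':
                return 10, i + 1
            if c.isdigit():              # 0-9 abbreviate S...S∅
                return int(c), i + 1
            raise ValueError('unexpected character: ' + c)

        value, end = parse(0)
        if end != len(expr):
            raise ValueError('trailing characters after a complete expression')
        return value

    assert evaluate('+SS∅S+SS∅SS∅') == 7      # 2 + S(2 + 2)
    assert evaluate('+×X+×X+×X∅122') == 122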

Range notation and current row

I wonder how the following works, and is it documented somewhere?

I have the sheet:

    A         B
    -----------------
    1
    2
    3         =$A:$A
    4
    5         =$A:$A

For column B (rows 3 and 5) I see the corresponding values from column A. It looks like =$A:$A (or even =$A$2:$A$5, etc.) works like =$A$3 and =$A$5 in some automagical manner (and there is no need to specify a different formula for every cell in B).

I like such behaviour but wish to be sure that it is a reliable solution.

Alternate notation for natural numbers and their addition?

The conventional way of expressing natural numbers uses base-10 notation, which, according to https://matheducators.stackexchange.com/questions/4367/how-to-teach-binary-numbers-to-5th-graders, some people don’t really fully understand. Not only that, but the notation 2 + 2 + 2 is ambiguous: it could mean (2 + 2) + 2 or 2 + (2 + 2). I know that since addition is associative, all ways of bracketing any addition expression give the same answer, but that doesn’t change the fact that the expression still has 2 different meanings. Some people refuse to take for granted that both meanings give the same answer just because people sometimes write it without brackets. To add to the confusion, we write 2 + 2 and not (2 + 2), and some might be like, “How can I figure out what 2 + (2 + 2) means when I don’t even know what (2 + 2) means?”

The expression 2 + 2 $\times$ 2, on the other hand, can have a different meaning depending on the interpretation, unless you use PEMDAS to decide that it can only mean one of those meanings. To make matters worse, some people may get confused by the type of calculator that, I think, treats every expression as left-associative (evaluating strictly left to right), and rely on its answer being the correct one under the PEMDAS rule when it actually isn’t. I think that because of many confusions similar to these, some mathematicians have a demand for formalization and for understanding why statements are true. Some mathematicians might insist on having a simple-to-describe, unambiguous notation for natural numbers and all addition expressions of them.

I have an idea for one that I’m wondering whether it is good. Let 0 denote the natural number 0 and $S$ denote the successor operation. Now we define the notation for 0 to be $0$, the notation for 1 to be $S0$, the notation for 2 to be $SS0$, and so on. Next, to express a sum of 2 natural numbers, we write + followed by the notations for each natural number, so 2 + 2 can be represented as $+SS0SS0$. For any 2 notations you have already constructed, we can also denote their sum as + followed by the notation for the first expression and then the notation for the second, so (2 + 2) + 2 can be denoted $++SS0SS0SS0$ and 2 + (2 + 2) can be denoted $+SS0+SS0SS0$. Furthermore, it can be shown that no string of the characters +, $S$, and 0 has more than one meaning. Not only that, but it can also be shown that sticking more characters onto the end of a meaningful expression never gives you another meaningful expression. Also, whenever you start with the empty string and keep sticking characters onto the end, there is a nice, simple way to compute whether or not you have yet completed a meaningful expression.
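For what it’s worth, here is a small sketch (my own illustration) of that incremental check: track how many complete subexpressions are still needed; + raises the count by one, $S$ leaves it unchanged, and 0 lowers it by one, and the string is a complete expression exactly when the count first reaches zero at the final character.

    # Sketch (my own) of the incremental completeness check for the
    # prefix notation over '+', 'S', '0'. `needed` counts how many
    # complete subexpressions are still required.

    def is_complete(expr):
        """Return True iff expr is exactly one well-formed expression."""
        needed = 1
        for c in expr:
            if needed == 0:          # already complete: extra characters
                return False
            if c == '+':
                needed += 1          # consumes one slot, opens two
            elif c == 'S':
                pass                 # consumes one slot, opens one
            elif c == '0':
                needed -= 1          # consumes one slot, opens none
            else:
                return False
        return needed == 0

    assert is_complete('+SS0SS0')        # 2 + 2
    assert is_complete('++SS0SS0SS0')    # (2 + 2) + 2
    assert not is_complete('+SS0')       # still waiting for a second summand
    assert not is_complete('+SS0SS00')   # a meaningful expression plus extra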

Programmatically finding the Landau notation (Big O or Theta notation) of an algorithm?

I’m used to searching for the Landau (Big O, Theta…) notation of my algorithms by hand to make sure they are as optimized as they can be, but when the functions get really big and complex, doing it by hand takes way too much time. It’s also prone to human error.

I spent some time on Codility (coding/algo exercises) and noticed they will give you the Landau notation for your submitted solution (for both time and memory usage).

I was wondering how they do that… How would you do it?

Is there another way besides Lexical Analysis or parsing of the code?

This question concerns mainly PHP and/or JavaScript, but I’m open to any language and theory.
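One pragmatic approach, sketched below in Python (a heuristic of my own, and not necessarily how Codility does it), is to time the code at several input sizes and estimate the exponent $k$ in $\sim n^k$ from the slope of a log-log fit. It measures the implementation rather than the abstract algorithm, and it cannot reliably separate close classes such as $n \log n$ from $n^{1.1}$:

    # Rough empirical sketch: fit the slope of log(time) vs. log(n).
    import math
    import random
    import time

    def estimate_exponent(func, sizes):
        points = []
        for n in sizes:
            data = [random.random() for _ in range(n)]
            start = time.perf_counter()
            func(data)
            elapsed = time.perf_counter() - start
            points.append((math.log(n), math.log(elapsed)))
        # Least-squares slope of log(time) against log(n).
        xs, ys = zip(*points)
        mean_x = sum(xs) / len(xs)
        mean_y = sum(ys) / len(ys)
        num = sum((x - mean_x) * (y - mean_y) for x, y in points)
        den = sum((x - mean_x) ** 2 for x in xs)
        return num / den

    # Sorting is Θ(n log n); the fitted exponent comes out slightly above 1.
    print(estimate_exponent(sorted, [2 ** k for k in range(10, 16)]))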