My best guess is that the series
$$\sum_{i=1}^n i(n-(i-1))$$
becomes
$$2\left[n + 2(n-1) + \dots + \frac{n}{2}\left(n-\left(\frac{n}{2}-1\right)\right)\right].$$
So the highest term is $n^2$ and there are $n$ terms. Does that mean it's $O(n^3)$? That seems high. Intuitively, it seems like it should be closer to $O(n^2)$, but I can't find a way to bring it down mathematically.
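For what it's worth, the sum also has a closed form (a quick sketch, splitting it into the two standard power sums):

$$\sum_{i=1}^{n} i\bigl(n-(i-1)\bigr) = (n+1)\sum_{i=1}^{n} i \;-\; \sum_{i=1}^{n} i^2 = \frac{n(n+1)^2}{2} - \frac{n(n+1)(2n+1)}{6} = \frac{n(n+1)(n+2)}{6},$$

which is $\Theta(n^3)$, so the $O(n^3)$ estimate is actually the tight order rather than an overestimate.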

# Tag: bound

## Finding the upper and lower bound using binary search

I am trying to solve this InterviewBit question https://www.interviewbit.com/problems/search-for-a-range/.

My code seems to be working properly: it gives the correct answers in my IDE and on all the sample test cases online, but it gets a TLE (time limit exceeded) on the online judge. It runs in $O(\log n)$ time, just a variation of simple binary search.

I am trying to understand what aspect of my code could be made more efficient and faster.

Here’s my code:

```cpp
// Pass by const reference: the original passed the vector by value,
// which copies the whole array (O(n)) on every call.
int findBound(const vector<int> &a, int b, bool lower){
    int low = 0, high = (int)a.size() - 1, mid;
    while (low <= high){
        mid = low + (high - low) / 2;
        if (a[mid] == b){
            if (lower){
                // First occurrence: if the element to the left is also b,
                // keep searching the left half.
                if (mid != 0 && a[mid-1] == b) high = mid - 1;
                else return mid;
            }
            else{
                // Last occurrence: if the element to the right is also b,
                // keep searching the right half.
                if (mid != (int)a.size() - 1 && a[mid+1] == b) low = mid + 1;
                else return mid;
            }
        }
        else if (a[mid] > b) high = mid - 1;
        else low = mid + 1;
    }
    return -1;
}

vector<int> Solution::searchRange(const vector<int> &A, int B) {
    vector<int> ans(2);
    ans[0] = findBound(A, B, true);   // first index of B
    ans[1] = findBound(A, B, false);  // last index of B
    return ans;
}
```

## Multivariable Calculus – Change of Bound Help!

I am having trouble following these steps in a reading on multivariable calculus.

> Due to a change of variables:

$$\int_t^T \int_t^s \theta_v \,dv \, ds = \int_t^T \int_s^T \theta_s \,dv \, ds$$

Could anyone explain how you get the right-hand side? What is the "change of variables" that is applied? Thanks! Also, $t \leq T$ is given.
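One way to read the identity (a sketch; the "change of variables" here amounts to interchanging the order of integration over the triangle $t \le v \le s \le T$ and then relabeling $v$ as $s$):

$$\int_t^T\!\int_t^s \theta_v \,dv\,ds \;=\; \int_t^T\!\int_v^T \theta_v \,ds\,dv \;=\; \int_t^T\!\int_s^T \theta_s \,dv\,ds.$$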

## Upper bound on the number of edges of a ten-vertex graph with no $C_3$ or $C_4$ subgraph

We are given a graph $G$ on ten vertices such that no three or four vertices of $G$ form a cycle (i.e., $G$ contains no $C_3$ or $C_4$ subgraph). What is the maximum number of edges $G$ could have?

## Lower bound on $1^k+2^k+\dots+n^k$

I calculated the worst-case time complexity of an algorithm using a recursion tree. (The problem cannot be solved by the Master Theorem.)

Now I want to find a lower bound on the expression I got. Specifically, I want to prove that

$$1^k + 2^k + \dots + n^k = \Omega(n^{k+1}).$$

## Proof of the lower bound

I have successfully proved the upper bound using a recursion tree, but I am stuck on proving the big-$\Omega$ bound. Please help, thanks.
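A standard way to get the $\Omega(n^{k+1})$ bound (a sketch, assuming $k \ge 0$) is to keep only the top half of the terms, each of which is at least $(n/2)^k$:

$$\sum_{i=1}^{n} i^k \;\ge\; \sum_{i=\lceil n/2\rceil}^{n} i^k \;\ge\; \frac{n}{2}\left(\frac{n}{2}\right)^{k} \;=\; \frac{n^{k+1}}{2^{k+1}}.$$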

## Gaussian complexity bound

I am reading Foundations of Machine Learning (1st edition). It seems that most generalization bounds in the literature are based on Rademacher complexity, rather than Gaussian complexity. So, I was wondering if the same bounds still hold for Gaussian complexity. For example, does Theorem 3.1 in the book still hold for Gaussian complexity? From the proof, it seems that it still holds for Gaussian complexity without changing anything.

I have read the paper *Rademacher and Gaussian Complexities: Risk Bounds and Structural Results*, and Lemma 4 there indicates that Rademacher complexity can be upper-bounded by Gaussian complexity times a constant. What is this constant? Is it data-dependent?

## How to find the upper bound of the following formula?

Let $x = \frac{1}{n}\sum_{i=1}^{n}x_i$ be the centroid of the set of vectors $x_i\in \mathbb{R}^m$. Given real numbers $0 \leq \gamma_i \leq 1$, define $$\alpha = \frac{1}{n}\left(\sum_{i=1}^n\gamma_i+\frac{1}{2}\left(a+\sqrt{a^2+4\,l^T l}\right)\right)$$ with $\beta = \sqrt{\frac{1}{n}\sum_{j=1}^n(x-x_j)^T(x-x_j)}$, $b_i = \frac{1}{\beta}\left(x-x_i\right)$, $a = \sum_{i=1}^{n}\gamma_i\left(b_i^T b_i-1\right)$, and $l=\sum_{i=1}^{n}\gamma_i b_i$.

We can observe that when all $\gamma_i$ equal $1$, the first term of $\alpha$ becomes one and the second becomes zero. I can also find a weak upper bound:
$$\begin{aligned}
\epsilon &= \max_i b_i^T b_i, \qquad \gamma = \frac{1}{n}\sum_{i=1}^n\gamma_i,\\
a &\leq n(n\epsilon-1)\gamma,\\
l^T l &\leq \left(\sum_{i=1}^n\gamma_i\left\|b_i\right\|\right)^2 \leq n^2\gamma^2\epsilon,\\
\alpha &\leq \gamma+n\gamma\epsilon.
\end{aligned}$$

But numerical simulation suggests that $\alpha$ hardly ever exceeds $1$. So I have two questions:

- Can we prove $\alpha \leq 1$?
- If not, can we find a stronger upper bound on $\alpha$?

## Subtracting minimums bound

Consider $|\min f-\min g|$. Can we bound this in terms of $\min(f-g)$ or $\min|f-g|$?

I know that $\min f+\min g\leq \min(f+g)$ and that $\min g=-\max(-g)$, but I'm not sure how to proceed.
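One standard observation (a sketch; note that the natural quantity here is $\sup|f-g|$ rather than $\min|f-g|$): for every $x$ we have $f(x) \le g(x) + \sup|f-g|$, so taking minima over $x$ and then swapping the roles of $f$ and $g$ gives

$$|\min f - \min g| \;\le\; \sup |f-g|.$$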

## Lower bound for the q-Pochhammer symbol

How can one prove that the q-Pochhammer symbol satisfies $$(1/n;\,1/n)_\infty = \prod_{k=1}^{\infty}\left(1-\frac{1}{n^k}\right) \geq 1 - \frac{1}{n-1}\,?$$
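A possible route (a sketch, using the Weierstrass product inequality $\prod_k(1-a_k) \ge 1-\sum_k a_k$ for $0 \le a_k < 1$, combined with the geometric series for $n > 1$):

$$\prod_{k=1}^{\infty}\left(1-\frac{1}{n^k}\right) \;\ge\; 1-\sum_{k=1}^{\infty}\frac{1}{n^k} \;=\; 1-\frac{1}{n-1}.$$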