Proof of a lower bound for a recurrence relation (CLRS exercise 4.6-2)

I am trying to solve exercise 4.6-2 of Introduction to Algorithms by Cormen, Leiserson, Rivest, and Stein (third edition). For recurrence relations $ T(n)=aT(n/b)+f(n)$ , where $ a\geq 1$ , $ b > 1$ , $ n$ is an exact power of $ b$ , and $ f(n)$ is an asymptotically positive function, it asks to prove that if $ f(n) = \Theta(n^{\log_ba}\lg^{k}n)$ , where $ k\geq0$ , then $ T(n)=\Theta(n^{\log_ba}\lg^{k+1}n)$ .

Since $ T(n) = \Theta(n^{\log_ba}) + g(n)$ , where $ g(n) = \sum_{j=0}^{\log_b n - 1} a^{j}f(n/b^{j})$ (by the recursion-tree analysis), I decided to consider the function $ g(n)$ first. I believe I have already shown that $ g(n) = O(n^{\log_ba}\lg^{k+1}n)$ .

But proving $ g(n) = \Omega(n^{\log_ba}\lg^{k+1}n)$ has turned out to be a challenge for me.

Below is my work so far (with the simplifying assumption that $ k$ is an integer). By assumption, there is a constant $ c > 0$ such that

$$\begin{aligned}
g(n) &\geq c \sum_{j=0}^{\log_b n - 1} a^{j}(n/b^{j})^{\log_b a}\log^{k}(n/b^{j}) \\
&= cn^{\log_b a}\sum_{j=0}^{\log_b n - 1} \log^{k}(n/b^{j}) \\
&= cn^{\log_b a}\sum_{j=0}^{\log_b n - 1}(\log n - \log b^{j})^{k} \\
&= cn^{\log_b a}\sum_{j=0}^{\log_b n - 1}\sum_{i=0}^{k} \binom{k}{i}\log^{k-i} n\,(-\log b^{j})^{i} \\
&= cn^{\log_b a}\log^{k} n\sum_{j=0}^{\log_b n - 1}\sum_{i=0}^{k} \binom{k}{i}\left(-\frac{\log b^{j}}{\log n}\right)^{i} \\
&= cn^{\log_b a}\log^{k} n \biggl(\log_b n + \sum_{j=0}^{\log_b n - 1}\sum_{i=1}^{k} \binom{k}{i}\left(-\frac{\log b^{j}}{\log n}\right)^{i} \biggr) \\
&\geq c'n^{\log_b a}\log^{k+1} n - cn^{\log_b a}\log^{k} n\sum_{j=0}^{\log_b n - 1}\sum_{i=1}^{k} \binom{k}{i}\left(\frac{\log b^{j}}{\log n}\right)^{i} \\
&= A(n) - B(n) = \Theta(n^{\log_b a}\lg^{k+1} n) - B(n).
\end{aligned}$$

This is where I am stuck: I cannot show that $ B(n)$ grows more slowly than $ A(n)$ . For instance, since $ (\log b^{j}/\log n)^{i} < 1$ , we can bound $ B(n)$ from above by some function $ B'(n)$ in which only sums of binomial coefficients remain. But then $ B'(n)$ itself turns out to be of order $ n^{\log_ba}\log^{k+1}n$ , which is too large.
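As a quick sanity check (my own numeric sketch, not part of the exercise), one can tabulate $ g(n)$ for a concrete instance and watch the ratio against the conjectured order. With $ a = 4$ , $ b = 2$ , $ k = 1$ we have $ n^{\log_b a} = n^2$ and $ f(n) = n^2 \lg n$ , so the ratio below should settle toward a constant (here $ 1/2$ ) if $ g(n) = \Theta(n^2 \lg^2 n)$ :

    import math

    # Numeric sanity check (my own sketch): a = 4, b = 2, k = 1, so
    # log_b(a) = 2 and f(n) = n^2 * lg(n).  If g(n) = Theta(n^2 * lg^2 n),
    # the printed ratio should settle toward a constant (here 1/2).

    a, b, k = 4, 2, 1

    def f(x):
        return x ** math.log(a, b) * math.log2(x) ** k

    def g(n):
        L = int(round(math.log(n, b)))       # n is an exact power of b
        return sum(a ** j * f(n / b ** j) for j in range(L))

    for m in range(4, 25, 4):
        n = b ** m
        ratio = g(n) / (n ** math.log(a, b) * math.log2(n) ** (k + 1))
        print(f"n = 2^{m:2d}:  g(n) / (n^2 lg^2 n) = {ratio:.4f}")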

So how can I prove that $ g(n) = \Omega(n^{\log_ba}\lg^{k+1}n)$ ?

Bound on the mutual information between a product of correlated random variables

Let $ G$ be a finite group.

Suppose the random variables $ X_1,\dots,X_N$ are sampled uniformly at random from $ G$ . Let $ Y_1,\dots,Y_N$ be random variables where $ Y_i$ is correlated with $ X_i$ and sampled according to some unknown distribution.

Given a bound on the mutual information $ I(X_k:Y_k) \leq \epsilon_k$ for all $ k$ , what is a good upper bound on $ I(X_1\dots X_N: Y_1\dots Y_N)$ , i.e., on the mutual information between the group products of the two collections of random variables?

I believe a bound of the form $$I(X_1\dots X_N: Y_1\dots Y_N) \leq C\prod_k \epsilon_k$$ for some constant $ C$ might hold, but I have had no luck proving it.
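For experimentation, here is a toy probe (my own sketch; the group $ G = \mathbb{Z}_2$ , the noise model $ Y_i = X_i \oplus N_i$ with independent Bernoulli($ p$ ) bits $ N_i$ , and the value $ p = 0.1$ are all assumptions not in the question). It computes $ \epsilon = I(X_i : Y_i)$ exactly and prints it next to the mutual information between the two products (XORs in $ \mathbb{Z}_2$ ), which may help probe what shape of bound is plausible:

    import itertools, math
    from collections import defaultdict

    # Toy probe (my own sketch): G = Z_2, Y_i = X_i XOR N_i with independent
    # noise bits N_i ~ Bernoulli(p).  Compares eps = I(X_i : Y_i) against
    # I(X_1 X_2 : Y_1 Y_2), where the "product" in Z_2 is XOR.

    def entropy(probs):
        return -sum(q * math.log2(q) for q in probs if q > 0)

    def mutual_information(joint):
        """I(U;V) computed from a dict {(u, v): probability}."""
        pu, pv = defaultdict(float), defaultdict(float)
        for (u, v), q in joint.items():
            pu[u] += q
            pv[v] += q
        return entropy(pu.values()) + entropy(pv.values()) - entropy(joint.values())

    p = 0.1  # noise level (my choice)

    # Joint law of a single pair (X_i, Y_i).
    single = defaultdict(float)
    for x, n in itertools.product((0, 1), repeat=2):
        single[(x, x ^ n)] += 0.5 * (p if n else 1 - p)

    # Joint law of the products (X_1 XOR X_2, Y_1 XOR Y_2).
    joint = defaultdict(float)
    for x1, x2, n1, n2 in itertools.product((0, 1), repeat=4):
        pr = 0.25 * (p if n1 else 1 - p) * (p if n2 else 1 - p)
        joint[(x1 ^ x2, x1 ^ n1 ^ x2 ^ n2)] += pr

    eps = mutual_information(single)
    print(f"eps            = {eps:.4f}")
    print(f"eps^2          = {eps ** 2:.4f}")
    print(f"I(prod : prod) = {mutual_information(joint):.4f}")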

Anti-concentration: upper bound for $P(\sup_{a \in \mathbb S_{n-1}}\sum_{i=1}^na_i^2Z_i^2 \ge \epsilon)$

Let $ \mathbb S_{n-1}$ be the unit sphere in $ \mathbb R^n$ and let $ z_1,\ldots,z_n$ be an i.i.d. sample from $ \mathcal N(0, 1)$ .

Question

Given $ \epsilon > 0$ (which may be assumed to be very small), what is a reasonable upper bound for the tail probability $ P(\sup_{a \in \mathbb S_{n-1}}\sum_{i=1}^na_i^2z_i^2 \ge \epsilon)$ ?

Observations

  • Using ideas from this other answer (MO link), one can establish the non-uniform anti-concentration bound $ P(\sum_{i=1}^na_i^2z_i^2 \le \epsilon) \le \sqrt{e\epsilon}$ for every fixed $ a \in \mathbb S_{n-1}$ .

  • The uniform analogue is another story. Maybe one can use covering numbers? (See also the numeric sketch after this list.)
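For experimentation, here is a short Monte Carlo sketch (my own, not from the question). It relies on one elementary observation: the quadratic form is linear in the weights $ (a_1^2,\dots,a_n^2)$ , which range over the probability simplex, so the supremum over the sphere equals $ \max_i z_i^2$ :

    import numpy as np

    # Monte Carlo probe (my own sketch).  The form sum_i a_i^2 z_i^2 is linear
    # in the weights (a_1^2, ..., a_n^2), which lie in the probability simplex,
    # so the supremum over the unit sphere equals max_i z_i^2.  We estimate
    # the tail probability P(max_i z_i^2 >= eps) empirically.

    rng = np.random.default_rng(0)

    def tail_estimate(n, eps, trials=200_000):
        z = rng.standard_normal((trials, n))
        sup_vals = (z ** 2).max(axis=1)      # = sup over the sphere
        return (sup_vals >= eps).mean()

    for n in (5, 20, 100):
        print(n, tail_estimate(n, eps=0.01))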

How to repeat bound controls when clicking Add New in ASP.NET MVC and Knockout.js

I want to repeat a drop-down list (already bound via a ViewBag property) together with another textbox when the user clicks Add Course. I am using ASP.NET MVC and Knockout.js, following a tutorial I saw, but the tutorial does not cover repeating already-bound controls. How can I achieve this with ASP.NET MVC and Knockout.js? Below is my code. Thanks.

    <table id="jobsplits">
        <thead>
            <tr>
                <th>@Html.DisplayNameFor(m => m.selectedCourse.FirstOrDefault().FK_CourseId)</th>
                <th>@Html.DisplayNameFor(m => m.selectedCourse.FirstOrDefault().CourseUnit)</th>
                <th></th>
            </tr>
        </thead>
        <tbody data-bind="foreach: courses">
            @for (int i = 0; i < Model.selectedCourse.Count; i++)
            {
                <tr>
                    <td>
                        @Html.DropDownListFor(model => model.selectedCourse[i].FK_CourseId,
                            new SelectList(ViewBag.Course, "Value", "Text", Model.FK_CourseId),
                            "Select Course",
                            new { @class = "form-control", data_bind = "value: courses" })
                    </td>
                    <td>
                        @Html.TextBoxFor(model => model.selectedCourse[i].CourseUnit,
                            new { htmlAttributes = new { @class = "form-control", @readonly = "readonly", data_bind = "value: courseUnit" } })
                    </td>
                    <td>
                        <button type="button" data-bind="click: $root.removeCourse" class="btn delete">Delete</button>
                    </td>
                </tr>
            }
        </tbody>
    </table>

    <div class="col-md-4">
        <button data-bind="click: addCourse" type="button" class="btn">Add Course</button>
    </div>

This is the script section

    @section Scripts{
        @Scripts.Render("~/bundles/knockout")
        <script>
            function CourseAdd(course, courseUnit) {
                var self = this;
                self.course = course;
                self.courseUnit = courseUnit;
            }

            function CourseRegViewModel() {
                var self = this;

                self.addCourse = function () {
                    self.courses.push(new CourseAdd(self.course, self.courseUnit));
                }

                self.courses = ko.observableArray([
                    new CourseAdd(self.course, self.courseUnit)
                ]);

                self.removeCourse = function (course) {
                    self.courses.remove(course)
                }
            }

            ko.applyBindings(new CourseRegViewModel());
        </script>
    }

Lower bound on the nonzero Laplacian eigenvalue with the smallest real part

Consider a directed graph with $ n$ vertices. The graph is not assumed to be connected, so the eigenvalue $ 0$ of the Laplacian may have multiplicity greater than $ 1$ . I am looking for a nonzero lower bound on the nonzero Laplacian eigenvalue with the smallest real part. The bound need not be very tight, but it must be a function of the network size $ n$ .
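For experimentation, here is a small numeric probe (my own sketch; the Erdős–Rényi-style random digraph model and the out-degree convention $ L = D_{\mathrm{out}} - A$ are assumptions not in the question) that computes this eigenvalue and lets one watch how it behaves as $ n$ grows:

    import numpy as np

    # Numeric probe (my own sketch): build the Laplacian L = D_out - A of a
    # random digraph and report the nonzero eigenvalue of smallest real part.

    rng = np.random.default_rng(1)

    def smallest_nonzero_real_part(n, p=0.3, tol=1e-9):
        A = (rng.random((n, n)) < p).astype(float)
        np.fill_diagonal(A, 0.0)            # no self-loops
        L = np.diag(A.sum(axis=1)) - A      # out-degree Laplacian
        ev = np.linalg.eigvals(L)
        nz = ev[np.abs(ev) > tol]           # drop the eigenvalue(s) at 0
        return nz[np.argmin(nz.real)]

    for n in (10, 50, 200):
        print(n, smallest_nonzero_real_part(n))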

Dynamic Perfect Hashing and Lower Bound

I am writing a seminar paper about dynamic perfect hashing, the FKS scheme, and its lower bound, which is proved with the adversary method mentioned here using a tree data structure. Somehow I don't understand how the tree is built by the algorithm. What I mean exactly is that the algorithm only generates two levels of perfect hash functions, whereas the adversary strategy seems to generate many perfect hash functions, one at each node, with the leaves being the items inserted via the hash functions.

Can someone please help me understand how the data structure is built while the algorithm runs? Thanks.
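To make the "two levels of hash functions" concrete, here is a minimal sketch of a static FKS-style table (my own illustration, not the adversary construction from the paper): a top-level universal hash splits the keys into buckets, and each bucket gets its own collision-free second-level function over a quadratic-size table.

    import random

    # Minimal static FKS-style table (my own sketch): one top-level universal
    # hash into n buckets, then one collision-free second-level hash per
    # bucket into a table of quadratic size.

    P = 2_147_483_647  # a prime larger than every key stored below

    def make_hash(m):
        """A random hash from the universal family x -> ((a*x + b) % P) % m."""
        a, b = random.randrange(1, P), random.randrange(P)
        return lambda x: ((a * x + b) % P) % m

    def build_fks(keys):
        n = len(keys)
        while True:                        # level 1: split keys into n buckets
            h = make_hash(n)
            buckets = [[] for _ in range(n)]
            for k in keys:
                buckets[h(k)].append(k)
            # retry until the total second-level space is O(n)
            if sum(len(b) ** 2 for b in buckets) <= 4 * n:
                break
        tables = []
        for b in buckets:                  # level 2: one perfect table per bucket
            m = len(b) ** 2                # quadratic size => collision-free w.h.p.
            while True:
                g = make_hash(m) if m else None
                slots = [None] * m
                ok = True
                for k in b:
                    i = g(k)
                    if slots[i] is not None:   # collision: resample g
                        ok = False
                        break
                    slots[i] = k
                if ok:
                    break
            tables.append((g, slots))
        return h, tables

    def contains(structure, key):
        h, tables = structure
        g, slots = tables[h(key)]
        return bool(slots) and slots[g(key)] == key

    keys = random.sample(range(1, 10**6), 100)
    fks = build_fks(keys)
    assert all(contains(fks, k) for k in keys)
    print("all 100 keys found")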

Proper Way To Compute An Upper Bound

In regard to the proof of Lemma 10 in "A remark on a conjecture of Chowla" by M. R. Murty, A. Vatwani, J. Ramanujan Math. Soc., 33, No. 2, 2018, 111-123,

the authors used the average value $ (\log x)^c$ , $ c$ a constant, of the number-of-divisors function $ \tau(n)=\sum_{d \mid n}1$ as an upper bound for $ \tau(d)^2$ , where $ d \leq x$ . To be specific, they claim that $$\sum_{q \leq x^{2\delta}}\tau(q)^2 \left| \sum_{\substack{m \leq x+2 \\ m \equiv a \bmod q}} \mu(m)\right| \ll x (\log x)^{2c},$$

where $ 2 \delta <1/2$ .
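As a quick numeric illustration of the average-order phenomenon being invoked (my own sketch, not from the paper), the mean of $ \tau(q)^2$ up to $ Q$ grows like a fixed power of $ \log Q$ (classically $ \sum_{q \leq Q} \tau(q)^2 \asymp Q \log^3 Q$ ), even though individual values of $ \tau(q)^2$ can be much larger:

    import math

    # Numeric probe (my own sketch): the mean of tau(q)^2 for q <= Q grows
    # like a power of log Q (sum_{q<=Q} tau(q)^2 is of order Q * log^3 Q),
    # in contrast with the much larger worst-case size of tau(q)^2.

    def tau_sieve(Q):
        """tau(1..Q) via a divisor sieve."""
        t = [0] * (Q + 1)
        for d in range(1, Q + 1):
            for m in range(d, Q + 1, d):
                t[m] += 1
        return t

    for Q in (10**3, 10**4, 10**5):
        t = tau_sieve(Q)
        s = sum(x * x for x in t[1:])
        print(f"Q = {Q:6d}:  sum tau^2 / (Q log^3 Q) = {s / (Q * math.log(Q) ** 3):.4f}")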

The questions are these:

  1. Is the main result invalid? The upper bound should be $$\sum_{q \leq x^{2\delta}}\tau(q)^2 \left| \sum_{\substack{m \leq x+2 \\ m \equiv a \bmod q}} \mu(m)\right| \ll x^{1+2\delta}.$$ This is the best unconditional upper bound obtainable from any known result, including Proposition 3.

  2. Is it true that the proper upper bound $ \tau(d)^2 \ll x^{2\epsilon}$ , $ \epsilon >0$ , is not required here?

  3. Can we use this as a precedent to prove other upper bounds in mathematics?

Upper bound for the difference of two expectations

Let $ f : \mathbb R^n \rightarrow \mathbb R$ . Is there a good upper bound for the following difference? \begin{equation*} \big| \mathbb E_{(x_1, \ldots, x_n) \sim \nu} f(x_1, \ldots, x_n) - \mathbb E_{(x_1, \ldots, x_n) \sim \mu^n} f(x_1, \ldots, x_n) \big| \end{equation*} Here each one-dimensional marginal probability distribution of $ \nu$ is $ \mu$ .
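A tiny concrete instance (my own sketch, not from the question) showing the difference need not vanish even with identical marginals: take $ n = 2$ , $ \mu = \mathrm{Bernoulli}(1/2)$ , $ \nu$ the perfectly correlated coupling, and $ f(x_1, x_2) = x_1 x_2$ :

    import itertools

    # Tiny concrete instance (my own sketch): n = 2, mu = Bernoulli(1/2), and
    # nu the perfectly correlated coupling (x1 = x2), whose marginals are mu.
    # For f(x1, x2) = x1 * x2 the two expectations differ by 1/4.

    f = lambda x1, x2: x1 * x2

    E_nu = sum(0.5 * f(x, x) for x in (0, 1))        # nu puts mass on (0,0),(1,1)
    E_mu2 = sum(0.25 * f(x1, x2)
                for x1, x2 in itertools.product((0, 1), repeat=2))

    print(E_nu - E_mu2)   # prints 0.25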

How did they come up with the MRRW bound?

Among the good asymptotic bounds in coding theory is the MRRW bound. It is obtained from Delsarte's linear programming problem by exhibiting a feasible solution. The LP problem is:

Suppose $ C \subset \mathbb{F}_2^n $ is a code such that $ d(C)\ge d$ . Let $ \beta(x) = 1+ \sum_{k=1}^{n} y_k K_k (x)$ be a polynomial such that $ y_k \ge 0$ but $ \beta(j) \le 0$ for $ j=d, d+1,\dots ,n$ . Then we have $ |C| \le \beta(0)$ .

Here $ K_k(x)$ are the Kravchuk polynomials. In the proof of the MRRW bound, up to scaling, they basically come up with the following polynomial $ \beta$ for general $ n$ :

$$\beta(x) = \frac{1}{x-a} \left[ K_t(a) K_{t+1}(x) - K_{t+1}(a)K_{t}(x) \right]^{2}$$

After applying the Christoffel-Darboux formula, the values of $ t$ and $ a$ are adjusted to make the bound optimal.
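For readers experimenting with this, here is a quick check (my own sketch) of the binary Kravchuk polynomials $ K_k(x) = \sum_j (-1)^j \binom{x}{j}\binom{n-x}{k-j}$ and the orthogonality relation $ \sum_x \binom{n}{x} K_k(x) K_l(x) = 2^n \binom{n}{k} \delta_{kl}$ that underlies both the LP setup and the Christoffel-Darboux step:

    from math import comb

    # Quick check (my own sketch) of the binary Kravchuk polynomials and their
    # orthogonality under the binomial weight:
    #   sum_x C(n, x) K_k(x) K_l(x) = 2^n * C(n, k) * delta_{kl}.

    def kravchuk(n, k, x):
        return sum((-1) ** j * comb(x, j) * comb(n - x, k - j)
                   for j in range(k + 1))

    n = 8
    for k in range(n + 1):
        for l in range(n + 1):
            s = sum(comb(n, x) * kravchuk(n, k, x) * kravchuk(n, l, x)
                    for x in range(n + 1))
            expected = 2 ** n * comb(n, k) if k == l else 0
            assert s == expected
    print("orthogonality verified for n =", n)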

There is no justification given for why such a polynomial was chosen, other than that it works. Is there anything more that can be said about why this polynomial was chosen?