Is there an upper limit to the damage that Eldritch Smite can deal?

The Eldritch Invocation "Eldritch Smite" says the following:

Eldritch Smite

Prerequisite: 5th level, Pact of the Blade feature

Once per turn when you hit a creature with your pact weapon, you can expend a warlock spell slot to deal an extra 1d8 force damage to the target, plus another 1d8 per level of the spell slot, and you can knock the target prone if it is Huge or smaller.

Unlike a paladin’s Divine Smite, which says "to a maximum of 5d8", Eldritch Smite states no such cap. Does that mean that a Warlock 5/Sorcerer 15 can use an 8th-level spell slot to deal 9d8 damage using Eldritch Smite?
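For illustration, the dice arithmetic the question assumes (1d8 plus 1d8 per slot level, with no cap) can be sketched in a few lines of Python; the function name is mine:

```python
import random

def eldritch_smite_damage(slot_level: int, rng: random.Random) -> int:
    """Roll the extra force damage for Eldritch Smite, reading the
    invocation with no cap: 1d8 plus 1d8 per level of the slot."""
    num_dice = 1 + slot_level
    return sum(rng.randint(1, 8) for _ in range(num_dice))

# An 8th-level slot rolls 9d8: between 9 and 72 force damage, averaging 40.5.
print(eldritch_smite_damage(8, random.Random()))
```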

Related: What is the damage dealt by Eldritch Smite?

Tight upper bound for forming an $n$-element Red-Black Tree from scratch

I learnt that in an order-statistic tree (an augmented Red-Black Tree in which each node $x$ stores an extra field holding the number of nodes in the subtree rooted at $x$), finding the $i$th order statistic takes $O(\lg n)$ time in the worst case. By contrast, if the dynamic set of elements is kept in an array, finding the $i$th order statistic takes $O(n)$ time in the worst case (where $n$ is the number of elements).

Now I would like a tight upper bound on the time to build an $n$-element Red-Black Tree from scratch, so that I can say which alternative is better: "maintain the set elements in an array and answer each query in $O(n)$ time", or "maintain the elements in a Red-Black Tree (whose construction takes, say, $O(f(n))$ time) and answer each query in $O(\lg n)$ time".

A very rough analysis goes as follows: inserting an element into an $n$-element Red-Black Tree takes $O(\lg n)$ time, and there are $n$ elements to insert, so building the tree takes $O(n\lg n)$ time. This analysis is quite loose, though: while the Red-Black Tree holds only a few elements, its height is small, and so is the time to insert into it.

I attempted a more detailed analysis as follows (but got stuck):

Suppose that when inserting the $j = (i+1)$-th element, the tree already holds $i$ elements, so its height is at most $2\lg(i+1)+1$. For an appropriate constant $c$, the total running time satisfies

$$T(n)\leq \sum_{j=1}^{n}c\,(2\lg j+1)$$

$$=c\sum_{i=0}^{n-1}(2\lg(i+1)+1)$$

$$=c\left[\sum_{i=0}^{n-1}2\lg(i+1)+\sum_{i=0}^{n-1}1\right]$$

$$=2c\sum_{i=0}^{n-1}\lg(i+1)+cn\tag{1}$$


$$\sum_{i=0}^{n-1}\lg(i+1)=\lg 1+\lg 2+\lg 3+\dots+\lg n=\lg(1\cdot 2\cdot 3\cdots n)=\lg(n!)\tag{2}$$

Now $$\prod_{k=1}^{n}k\leq n^{n},\quad\text{which is a very loose upper bound}\tag{3}$$

Using $(3)$ in $(2)$ and substituting the result into $(1)$, we get $T(n)=O(n\lg n)$, which is the same as the rough analysis…

Can I do anything better than $(3)$?

All the nodes referred to are internal nodes of the Red-Black Tree.
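Not a proof, but a quick numeric check of how much slack $(3)$ actually introduces: the sum in $(2)$ is exactly $\lg(n!)$, and evaluating it directly shows the total is still $\Theta(n\lg n)$ even without the loose bound (Stirling's approximation gives $\lg(n!) = n\lg n - \Theta(n)$). A small Python sketch (function names mine):

```python
import math

def insertion_cost_upper_bound(n: int) -> float:
    """Sum of the per-insertion height bounds: sum_{i=0}^{n-1} (2*lg(i+1) + 1)."""
    return sum(2 * math.log2(i + 1) + 1 for i in range(n))

def via_factorial(n: int) -> float:
    """The same quantity written as 2*lg(n!) + n, computed via lgamma."""
    return 2 * math.lgamma(n + 1) / math.log(2) + n

for n in (10, 1000, 100000):
    exact = insertion_cost_upper_bound(n)
    # The ratio to n*lg(n) stays bounded (it tends to 2), so the exact sum
    # is Theta(n lg n): improving (3) tightens constants, not the asymptotics.
    print(n, round(exact, 1), round(exact / (n * math.log2(n)), 3))
```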

Finding the upper bound of states in Minimal Deterministic Finite Automata

I have a task to determine an upper bound on the number of states of the minimal Deterministic Finite Automaton that recognizes the language $L(A_1) \backslash L(A_2)$, where $A_1$ is a Deterministic Finite Automaton (DFA) with $n$ states and $A_2$ is a Non-deterministic Finite Automaton (NFA) with $m$ states.

The way I am trying to solve the problem:

  1. $L(A_1) \backslash L(A_2) = L(A_1) \cap (\Sigma^* \backslash L(A_2))$, which is a language recognised by an automaton $A'$ with $n \cdot m$ states
  2. Determinization of $A'$, which yields $(n \cdot m)^2$ states, and this is the upper bound on the number of states.

Am I right?
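As a side note, bounds like this can be sanity-checked empirically on small machines. The sketch below (the helper name and the toy NFA are mine) runs the subset construction and counts how many subsets of an $m$-state NFA's state set are actually reachable; in the worst case this can be as many as $2^m$, which is the cost hiding inside any complementation of an NFA:

```python
def reachable_subsets(nfa_delta, alphabet, start):
    """Subset construction: search over sets of NFA states.
    nfa_delta[(q, a)] is the set of successors of state q on symbol a."""
    start_set = frozenset([start])
    seen = {start_set}
    frontier = [start_set]
    while frontier:
        S = frontier.pop()
        for a in alphabet:
            T = frozenset(q2 for q in S for q2 in nfa_delta.get((q, a), ()))
            if T not in seen:
                seen.add(T)
                frontier.append(T)
    return seen

# A toy 3-state NFA over {0, 1}; at most 2**3 = 8 subsets can ever appear.
delta = {(0, '0'): {0, 1}, (1, '1'): {2}, (2, '0'): {0, 2}}
subsets = reachable_subsets(delta, '01', 0)
print(len(subsets))  # number of reachable subset-states, at most 8
```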

Mathematics of the upper bound for the encoded input of an instance of Kruskal’s MWST problem

I was going through the classic text “Introduction to Automata Theory, Languages, and Computation” by Hopcroft, Ullman and Motwani, where I came across the encoding of an instance of Kruskal’s Minimum Weight Spanning Tree ($MWST$) problem so that it could be given as input to a Turing Machine. The encoding is as shown.

Kruskal’s problem can be couched as: “Given this graph $G$ and limit $W$, does $G$ have a spanning tree of weight $W$ or less?”


Let us consider a possible code for the graphs and weight limits that could be the input to the $ MWST$ problem. The code has five symbols, $ 0$ , $ 1$ , the left and right parentheses, and the comma.

  1. Assign integers $ 1$ through $ m$ to the nodes.

  2. Begin the code with the value of $ m$ in binary and the weight limit $ W$ in binary, separated by a comma.

  3. If there is an edge between nodes $ i$ and $ j$ with weight $ w$ , place $ (i, j, w)$ in the code. The integers $ i$ , $ j$ , and $ w$ are coded in binary. The order of $ i$ and $ j$ within an edge, and the order of the edges within the code are immaterial. Thus, one of the possible codes for the graph of Fig. with limit $ W = 40$ is

$ 100, 101000(1, 10, 1111)(1, 11, 1010)(10, 11, 1100)(10, 100, 10100)(11, 100, 10010)$

If we represent inputs to the $MWST$ problem as above, then **an input of length $n$ can represent at most $O(n/\log n)$ edges**. **It is possible that $m$, the number of nodes, could be exponential in $n$, if there are very few edges.** However, unless the number of edges, $e$, is at least $m-1$, the graph cannot be connected and therefore will have no $MWST$, regardless of its edges. Consequently, if the number of nodes is not at least some fraction of $n/\log n$, there is no need to run Kruskal’s algorithm at all; we simply say “no, there is no spanning tree of that weight.”

I do not understand the mathematics behind the lines shown in bold. How are they bounding the number of edges by dividing the total input size $n$ by $\log n$? And if the number of edges is small, how can the number of nodes $m$ be exponential in $n$?
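To make the book's code concrete, here is a small sketch (the function name is mine) that produces exactly the encoding described above. The relevant observation is that each edge triple $(i, j, w)$ costs a number of symbols proportional to the binary lengths of $i$, $j$ and $w$, so when node numbers are large each edge consumes on the order of $\log m$ symbols of the input:

```python
def encode_instance(m, W, edges):
    """Encode an MWST instance in the book's five-symbol alphabet
    {0, 1, '(', ')', ','}: m and W in binary separated by a comma,
    followed by one (i,j,w) triple per edge, all integers in binary."""
    def to_bin(x):
        return format(x, 'b')
    parts = [to_bin(m) + ',' + to_bin(W)]
    for i, j, w in edges:
        parts.append('(' + ','.join(to_bin(v) for v in (i, j, w)) + ')')
    return ''.join(parts)

# The example instance quoted above: 4 nodes, weight limit W = 40.
code = encode_instance(
    4, 40, [(1, 2, 15), (1, 3, 10), (2, 3, 12), (2, 4, 20), (3, 4, 18)])
print(code)   # matches the book's code, minus typographic spaces
print(len(code))
```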

Upper Bound on Local Search MST

Consider a connected weighted graph $G=(V,E)$. The local search algorithm is used to find a Minimum Spanning Tree, swapping only one edge for another at a time. At each step, the algorithm selects the best improving swap of edges $\{e, f\}$: the one for which $w_f - w_e$ is maximized.

Here is how the algorithm works:

Define a spanning tree $T$ in $G$. Let $e$ be an edge in $G \setminus T$, and define $C$ as the cycle created when $e$ is added to $T$. If $w_e < w_f$, where $f$ is the edge in $C \setminus \{e\}$ of largest weight, then $\{e, f\}$ is an improving swap and we set $T := (T \setminus \{f\}) \cup \{e\}$. We repeat this until there are no more improving swaps.

There are finitely many spanning trees of a graph $G$, so the best-improving-swap local search must terminate after a finite number of steps, since the weight of the spanning tree strictly decreases at each step.

What I need is an upper bound on the maximum number of steps the best-improving-swap local search algorithm can take. Assume every edge weight in $G$ is distinct.

I’m really stuck and not sure how to even start this problem; any help would be appreciated.
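One way to start experimenting is to implement the procedure and count swaps on small graphs. Below is a minimal sketch (all names mine) of the best-improving-swap rule described above; it is slow (a fresh tree traversal per candidate edge) and only meant for small instances:

```python
def improving_swap_local_search(edges, tree):
    """Best-improving-swap local search for an MST.
    edges: list of (u, v, w) with distinct weights w.
    tree: iterable of indices into `edges` forming a spanning tree.
    Returns (number of swaps performed, final tree as sorted indices)."""
    tree = set(tree)

    def path_edges(src, dst):
        # DFS in the current tree: edges on the unique src -> dst path.
        adj = {}
        for idx in tree:
            u, v, _ = edges[idx]
            adj.setdefault(u, []).append((v, idx))
            adj.setdefault(v, []).append((u, idx))
        stack = [(src, [])]
        seen = set()
        while stack:
            node, path = stack.pop()
            if node == dst:
                return path
            seen.add(node)
            for nxt, idx in adj.get(node, ()):
                if nxt not in seen:
                    stack.append((nxt, path + [idx]))
        return []

    swaps = 0
    while True:
        best = None  # (gain, edge to add, edge to drop)
        for e_idx, (u, v, w) in enumerate(edges):
            if e_idx in tree:
                continue
            cycle = path_edges(u, v)          # the edges of C \ {e}
            f_idx = max(cycle, key=lambda i: edges[i][2])
            gain = edges[f_idx][2] - w        # w_f - w_e
            if gain > 0 and (best is None or gain > best[0]):
                best = (gain, e_idx, f_idx)
        if best is None:
            return swaps, sorted(tree)
        _, e_idx, f_idx = best
        tree.remove(f_idx)
        tree.add(e_idx)
        swaps += 1

# A 4-cycle with a chord, starting from the heaviest spanning tree.
edges = [(0, 1, 1), (1, 2, 2), (2, 3, 3), (3, 0, 10), (0, 2, 4)]
print(improving_swap_local_search(edges, [1, 2, 3]))
```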

The total length of input to a pushdown automaton which accepts by empty stack is an upper bound on the number of states and stack symbols

I was going through the classic text “Introduction to Automata Theory, Languages, and Computation” (3rd Edition) by John Hopcroft, Rajeev Motwani, and Jeffrey Ullman, where I came across a few statements about a pushdown automaton (PDA) that accepts by empty stack:

1. $ n$ , the total length of the input, is surely an upper bound on the number of states and stack symbols.

2. One rule could place almost n symbols on the stack.

The authors made these statements while introducing the decision properties of CFLs (Context-Free Languages).

Now here are some points by which I seem to be able to contradict the claim rather than prove it correct.

  1. Suppose $n$ is the total length of the input. As per the design of the PDA, it might happen that not all of the PDA’s states are involved in accepting the input string, so we cannot say that $n$ is an upper bound on the number of states the PDA has.

  2. Though the PDA accepts by empty stack, it might happen that a transition adds more than $n$ symbols to the top of the stack; then, after consuming the $n$ input symbols, we can stay in a particular state and use epsilon transitions to remain there and pop symbols from the stack until it becomes empty. So how can we say that $n$ is an upper bound on the number of symbols on the stack? We arrive at a contradiction…

I don’t understand where I am making the mistake, because the same statements appear in the 3rd edition of the book without any changes from the second edition, which makes it probable that they are correct.

I have attached the corresponding portion of the text below.

Find the upper bound of the recurrence T(n) = T(n - 4) + n where n is odd

I am trying to solve this recurrence assuming $n$ is odd: $T(n) = T(n - 4) + \Theta(n)$

What I did so far was:

First, $T(n - 4) = T(n - 8) + (n - 4)$, thus we get $T(n) = T(n - 8) + (n - 4) + n$

Next, $T(n - 8) = T(n - 12) + (n - 8)$, thus we get $T(n) = T(n - 12) + (n - 8) + (n - 4) + n$

Next, $T(n - 12) = T(n - 16) + (n - 12)$, thus we get $T(n) = T(n - 16) + (n - 12) + (n - 8) + (n - 4) + n$

So now, seeing where this is going, I wrote the general form:

$T(n) = T(n - 4k) + [n - 4(k - 1)] + [n - 4(k - 2)] + \dots + (n - 4) + n$

As mentioned, $n$ is odd, so I write it as $n = 2k + 1$, which gives $k = \frac{n - 1}{2}$. I will plug that in to get:

$T(n) = T(-n + 2) + (-n + 6) + (-n + 10) + \dots + (n - 4) + n$

This is where I am stuck. I suspect that something before the $(n - 4) + n$ terms must have gone wrong, but I am not too sure, so I decided to ask you guys.
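In the meantime, a quick numeric check can show where the expansion ought to land; the base case below is my own assumption, since the question does not specify one:

```python
def T(n):
    """Iteratively evaluate T(n) = T(n - 4) + n, assuming T(n) = 1 for n <= 3."""
    total = 1
    while n > 3:
        total += n
        n -= 4
    return total

# The unrolled sum n + (n - 4) + (n - 8) + ... has roughly n/4 terms,
# so for odd n the total should grow like n^2 / 8:
for n in (101, 1001, 10001):
    print(n, T(n), T(n) / n ** 2)
```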

Thank you so much!

Upper bound on the average number of overlaps for an interval within a set of intervals

Let $\mathcal{I}$ be a set of intervals with cardinality $L$, where each interval $I_i \in \mathcal{I}$ is of the form $[a_i, b_i]$ and the $a_i, b_i$ are pairwise distinct non-negative integers bounded by a constant $C$, i.e. $0 \leq a_i < b_i \leq C$. We say a pair of intervals $I_i, I_j \in \mathcal{I}$ overlap if the length of their overlap is $> 0$.

Define a function $F(i)$ which computes the number of intervals in $\mathcal{I} \backslash \{I_i\}$ that interval $I_i$ overlaps with: \begin{equation} F(i) = \sum_{j=1, j \neq i}^{L} Overlap(I_i, I_j) \end{equation} where $Overlap(I_i, I_j)$ is an indicator function which returns 1 if $I_i, I_j$ overlap, and 0 otherwise.

The average number of overlaps for the intervals in $ \mathcal{I}$ , denoted by $ Avg(\mathcal{I})$ is given by $ Avg(\mathcal{I}) = \dfrac{\sum_{i=1}^{L}F(i)}{L}$ .

The question is, suppose we are allowed to choose the intervals in the set $ \mathcal{I}$ with the following additional conditions:

  1. For any $t \in [0, C]$, at most $M$ (with $M < L$) intervals in $\mathcal{I}$ contain $t$. Stated differently, at most $M$ intervals overlap at any point in time.
  2. Any interval in $\mathcal{I}$ overlaps with at most $K$ other intervals, where $M < K < L$.

then what is an upper bound on $Avg(\mathcal{I})$ over all choices of the intervals in $\mathcal{I}$ satisfying 1 and 2?

In case you are wondering, this problem is of interest to me in order to be able to study the run-time of a scheduling algorithm.

I am unable to come up with a non-trivial upper bound for $ Avg(\mathcal{I})$ and would be interested to know if the problem I stated has been studied. I am also open to ideas on how one may be able to obtain a tight upper bound for $ Avg(\mathcal{I})$ .
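In case it helps while exploring, here is a brute-force evaluator for $Avg(\mathcal{I})$ (names are mine); it is $O(L^2)$ and only intended for testing small candidate interval families against conjectured bounds:

```python
def avg_overlaps(intervals):
    """Average over i of the number of intervals j != i that overlap
    interval i with positive length -- the quantity Avg(I) above."""
    def overlap(x, y):
        # Positive-length intersection of [x0, x1] and [y0, y1].
        return min(x[1], y[1]) - max(x[0], y[0]) > 0

    total = sum(overlap(I, J)
                for a, I in enumerate(intervals)
                for b, J in enumerate(intervals) if a != b)
    return total / len(intervals)

# Three staggered intervals: the middle one overlaps both neighbours,
# the outer two overlap only the middle, so Avg = (1 + 2 + 1) / 3 = 4/3.
print(avg_overlaps([(0, 3), (2, 5), (4, 7)]))
```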