## Tight upper bound for forming an $n$ element Red-Black Tree from scratch

I learnt that in an order-statistic tree (an augmented Red-Black Tree in which each node $$x$$ contains an extra field storing the number of nodes in the subtree rooted at $$x$$), finding the $$i$$th order statistic can be done in $$O(\lg n)$$ time in the worst case. If instead the dynamic set is kept in an array, finding the $$i$$th order statistic takes $$O(n)$$ time in the worst case, where $$n$$ is the number of elements.

Now I wanted a tight upper bound on the time to build an $$n$$-element Red-Black Tree from scratch, so that I could say which alternative is better: "maintain the set elements in an array and answer each query in $$O(n)$$ time", or "maintain the elements in a Red-Black Tree (whose construction takes $$O(f(n))$$ time, say) and answer each query in $$O(\lg n)$$ time".

A very rough analysis goes as follows: inserting an element into an $$n$$-element Red-Black Tree takes $$O(\lg n)$$ time, and there are $$n$$ elements to insert, so building the tree takes $$O(n\lg n)$$ time. This analysis is quite loose, though: when there are only a few elements in the Red-Black Tree, its height is much smaller, and so is the time to insert into it.

I tried to attempt a detailed analysis as follows (but failed however):

Suppose that while inserting the $$j = (i+1)$$th element, the height of the tree is at most $$2\lg(i+1)+1$$. For an appropriate constant $$c$$, the total running time is

$$T(n)\leq \sum_{i=0}^{n-1}c\,(2\lg(i+1)+1)$$

$$=c\sum_{i=0}^{n-1}(2\lg(i+1)+1)$$

$$=c\left[\sum_{i=0}^{n-1}2\lg(i+1)+\sum_{i=0}^{n-1}1\right]$$

$$=2c\sum_{i=0}^{n-1}\lg(i+1)+cn\tag1$$

Now

$$\sum_{i=0}^{n-1}\lg(i+1)=\lg(1)+\lg(2)+\lg(3)+\cdots+\lg(n)=\lg(1\cdot 2\cdot 3\cdots n)=\lg(n!)\tag2$$

Now $$\prod_{k=1}^{n}k\leq n^n, \text{which is a very loose upper bound}\tag 3$$

Using $$(3)$$ in $$(2)$$ and substituting the result into $$(1)$$, we get $$T(n)=O(n\lg n)$$, which is the same as the rough analysis…
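As a quick numeric sanity check (my own script, not part of the original question), one can compare the exact value $$\lg(n!)=\sum_{i=0}^{n-1}\lg(i+1)$$ with the $$n\lg n$$ bound obtained from $$(3)$$:

```python
import math

# Compare the exact sum lg(n!) = sum_{i=0}^{n-1} lg(i+1)
# with the loose upper bound n*lg(n) obtained from n! <= n^n.
for n in [10, 100, 1000, 10000]:
    exact = sum(math.log2(i + 1) for i in range(n))  # lg(n!)
    loose = n * math.log2(n)                         # lg(n^n)
    print(n, round(exact / loose, 3))
```

The ratio creeps toward 1 as $$n$$ grows, which matches Stirling's approximation $$\lg(n!) = n\lg n - n\lg e + O(\lg n)$$: sharpening $$(3)$$ only improves the constant, not the order of growth.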

Can I do anything better than $$(3)$$?

All the nodes referred to are internal nodes of the Red-Black Tree.

## Prove that the 2-approximation of a modified local search algorithm for max-cut is tight

Consider the following local-search approximation algorithm for the unweighted max-cut problem: start with an arbitrary partition of the vertices of the given graph $$G = (V,E)$$, and as long as you can improve the cut by moving one or two vertices from one side of the partition to the other, or by swapping two vertices on opposite sides of the partition, do so.

I know that this algorithm is a 2-approximation algorithm (the cut it finds contains at least half as many edges as an optimal cut). I want to prove that this approximation factor is tight; that is, the algorithm is not an $$\alpha$$-approximation algorithm for any $$\alpha < 2$$.

I found an example showing tightness for the “regular” local-search approximation algorithm for max-cut, in which at each iteration you may move only one vertex from one side of the partition to the other and may not swap two vertices on opposite sides. The example is the complete bipartite graph $$K_{2n,2n}$$. If the initial cut puts $$n$$ vertices from each class of the graph on each side of the partition, the cut contains $$2n^2$$ edges, while the optimal cut contains all $$4n^2$$ of them. The “regular” algorithm cannot improve the cut from this initial position.
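As a quick numeric check of the counts in this example (my own script; the vertex labels are arbitrary):

```python
from itertools import product

n = 3
# K_{2n,2n}: vertex classes L and R, every L-R pair is an edge.
L = [("L", i) for i in range(2 * n)]
R = [("R", i) for i in range(2 * n)]
edges = list(product(L, R))

# Bad initial partition: n vertices of each class on each side.
side = set(L[:n] + R[:n])
cut = sum(1 for u, v in edges if (u in side) != (v in side))
print(cut, len(edges))  # 2n^2 cut edges out of 4n^2 total
```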

However, this example doesn’t work for the algorithm described above, because we can improve the cut by swapping two vertices from opposite sides of the partition.

Can someone give me an example, or a hint toward such an example, that proves the approximation factor is tight? Thanks.

## Tight analysis for the ratio of $1-\frac{1}{e}$ in the unweighted maximum coverage problem

The unweighted maximum coverage problem is defined as follows:

Instance: A set $$E = \{e_1,…,e_n\}$$ and $$m$$ subsets of $$E$$, $$S = \{S_1,…,S_m\}$$.

Objective: find a subset $$S’ \subseteq S$$ such that $$|S’| = k$$ and the number of covered elements is maximized.

The problem is NP-hard, but a simple greedy algorithm (at each stage, choose a set which contains the largest number of uncovered elements) achieves an approximation ratio of $$1-\frac{1}{e}$$.

In the following post, there is an example of when the greedy algorithm fails.

Tight instance for unweighted maximum coverage problem?

I wish to prove that the approximation ratio of the greedy algorithm is tight; that is, the greedy algorithm is not an $$\alpha$$-approximation algorithm for any $$\alpha > 1-\frac{1}{e}$$.

I think that if I can find, for every $$k$$ (or for an increasing sequence of $$k$$'s), an instance where the number of elements covered by the greedy algorithm is $$1-(1- \frac{1}{k})^k$$ times the number of elements covered by an optimal solution, the tightness of the ratio will be proved.
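For reference, the target ratio indeed decreases toward $$1-\frac{1}{e}\approx 0.632$$ as $$k$$ grows (a quick check, not part of the argument):

```python
import math

# 1 - (1 - 1/k)^k decreases monotonically toward 1 - 1/e as k grows,
# so instances achieving exactly this ratio for unbounded k would
# rule out any approximation factor alpha > 1 - 1/e.
for k in [1, 2, 5, 10, 100, 1000]:
    ratio = 1 - (1 - 1 / k) ** k
    print(k, round(ratio, 4))
```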

Can someone give a clue for such instances?

I thought of an initial idea: let $$E = \{ a_1,…,a_n,b_1,…,b_n,…,k_1,…,k_n\}$$, a set with $$n\cdot k$$ elements. Let $$S$$ include $$k$$ pairwise-disjoint sets of $$n$$ elements each, $$A = \{ a_1,…,a_n\},…,K= \{k_1,…,k_n\}$$. The optimal solution selects these $$k$$ sets and covers all the elements of $$E$$. Now I want to add $$k$$ further sets to $$S$$ that the greedy algorithm will pick instead, covering only $$1-(1- \frac{1}{k})^k$$ of the elements of $$E$$. The first such set has size $$n$$ and takes $$\frac{n}{k}$$ elements from each of the first $$k$$ sets: $$S_1 = \{a_1,…,a_{\frac{n}{k}},b_1,…,b_{\frac{n}{k}},…,k_1,…,k_{\frac{n}{k}} \}$$. The second such set has size $$n – \frac{n}{k}$$ and takes the next $$(n – \frac{n}{k})\cdot\frac{1}{k}$$ still-uncovered elements from each of the first $$k$$ sets: $$S_2 = \{a_{\frac{n}{k}+1},…,a_{\frac{n}{k}+ (n – \frac{n}{k})\cdot\frac{1}{k}},b_{\frac{n}{k}+1},…,b_{\frac{n}{k}+ (n – \frac{n}{k})\cdot\frac{1}{k}},…,k_{\frac{n}{k}+1},…,k_{\frac{n}{k}+ (n – \frac{n}{k})\cdot\frac{1}{k}} \}$$, and so on until there are $$k$$ such additional sets.

I don’t think this idea works for every $$k$$ and $$n$$, and I’m not sure it’s the right approach.

Thanks.

## EZ Extra Tight Round Liner Tattoo Needles Cartridge

Wujiang City Shen Ling Medical Device Co., Ltd. (we also operate a second company, Wujiang City Cloud & Dragon Medical Device Co., Ltd.) is located in Beishe town, Wujiang City, very close to Shanghai, only a 1.5-hour drive away.
We are the largest professional acupuncture-needle and tattoo-needle manufacturer in China, specializing in the production of single-use acupuncture and tattoo needles. About 15 years ago we obtained FDA registration, CE certification, ISO certification, and TGA registration.
Our company was established in 1992 and has a production history of several decades. We have over 300 employees and a factory covering more than 30,000 square meters. We produce 1,000,000 acupuncture needles and 100,000 tattoo needles each day, and our total annual sales exceed 6 million U.S. dollars.
You are welcome to visit our factory; we accept all kinds of OEM orders with your own packaging and designs. We have been making OEM packs for some of the most famous brands on the world market for over 15 years.
We are a top-quality manufacturer in the Chinese market, offering both standard-quality and premium-precision needles. We also make disposable plastic grips and tips, and we distribute tattoo machines and other tattoo supplies.
Cartridge Tattoo Needles
Make tattooing easy…
Tattoo cartridge needles are the revolutionary successor to traditional premade tattoo needles; the Cheyenne Hawk cartridge needle is the leader of this kind of needle. Setting up a needle cartridge is usually faster and easier than setting up a traditional needle, so they are also called “easy tattoo cartridges”.
We now manufacture two different top-quality cartridge needles and sell over 10,000 boxes each month to wholesalers. Our own brand is “OPPSITE”, “O” cartridge for short. We also accept large orders in different colors and OEM packs.
We have made microblading cosmetic tattoo needles for over 10 years, starting with a famous USA cosmetic-beauty customer, and by now we offer about 20 different styles for microblading. Most are made to order: we accept drawn designs and can make all kinds of shapes and needle combinations as customers specify.
Scalp Micropigmentation Needles
This is a new territory of tattooing that helps people who have lost hair to alopecia appear to have fuller hair. The needle is a variety derived from traditional tattoo needles, and it can now be used with cartridge machines, which makes it easier to handle.
Cartridge Tattoo Machines
Since many customers who buy our cartridge needles also want a machine that fits them perfectly, this is the one we picked out as the best-quality machine on the Chinese market: steady operation and less heat. EZ Extra Tight Round Liner Tattoo Needles Cartridge
website: http://www.tattoo-cartridge.com/
website2: http://www.cartridge-needles.com/

## Cuckoo hashing with a stash: how tight are the bounds on the failure probability?

I was reading this very good summary of Cuckoo hashing.

It includes a result (page 5) that:

A stash of constant size $$s$$ reduces the probability of any failure from $$\Theta(1/n)$$ to $$\Theta(1/n^{s+1})$$ for the case of $$d= 2$$ choices

It references the paper KMW08. But KMW08 only has the result (Theorem 2.1) that:

For every constant integer $$s \geq 1$$, for a sufficiently large constant $$\alpha$$, the size $$S$$ of the stash after all items have been inserted satisfies $$Pr(S \geq s) =O(n^{-s})$$.

Note that the $$s$$ in the two theorems is slightly different: in the first, a stash of size $$s$$ is not a failure; in the second, a stash of size $$s$$ is a failure. This is why the first has $$s+1$$ and the second has $$s$$.

The remaining difference between the two is that the first uses $$\Theta$$-notation, whereas the second uses big-O notation. So my questions:

• Do we know that the failure probability is $$\Omega(n^{-(s+1)})$$?
• If so, do we know the constants in the $$\Theta(n^{-(s+1)})$$ expression?

And if so, which papers presented these results?
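To make the role of the stash concrete, here is a minimal toy implementation of $$d=2$$ cuckoo hashing with a bounded stash (my own illustrative sketch; the hash functions, eviction limit, and stash size are arbitrary choices, and this is not the construction analyzed in KMW08):

```python
import random

class CuckooWithStash:
    """Toy d=2 cuckoo hash table with a bounded stash."""

    def __init__(self, n_slots, stash_size=4, max_kicks=50):
        self.n = n_slots
        self.t1 = [None] * n_slots
        self.t2 = [None] * n_slots
        self.stash = []
        self.stash_size = stash_size
        self.max_kicks = max_kicks
        # Two "hash functions" derived from independent random seeds.
        self.s1, self.s2 = random.random(), random.random()

    def _h(self, seed, key):
        return hash((seed, key)) % self.n

    def insert(self, key):
        for _ in range(self.max_kicks):
            i = self._h(self.s1, key)
            if self.t1[i] is None:
                self.t1[i] = key
                return True
            self.t1[i], key = key, self.t1[i]  # evict occupant of table 1
            j = self._h(self.s2, key)
            if self.t2[j] is None:
                self.t2[j] = key
                return True
            self.t2[j], key = key, self.t2[j]  # evict occupant of table 2
        if len(self.stash) < self.stash_size:
            self.stash.append(key)             # overflow lands in the stash
            return True
        return False                           # build fails only if stash is full

    def lookup(self, key):
        return (self.t1[self._h(self.s1, key)] == key
                or self.t2[self._h(self.s2, key)] == key
                or key in self.stash)

# Demo: insert 300 keys into two tables of 1,000 slots each.
random.seed(1)
table = CuckooWithStash(1000)
print(all(table.insert(k) for k in range(300)), len(table.stash))
```

The point of the quoted theorems is exactly the failure event in `insert`: without a stash, any insertion exceeding `max_kicks` is a failure; with a stash, the build fails only when more than `stash_size` keys overflow.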

## Tight coupling between parent and children: always to be avoided?

Say we consider two inherently coupled elements, using a real-life like example:

• Body
• PhysicalIllness

Note: the following code is pseudo-Java for the sole purpose of syntax-coloring, pure syntax doesn’t really matter here.

Body is a class that holds several basic properties, such as stamina, focus, etc., plus an array of PhysicalIllness representing all the illnesses it has contracted.

    class Body {
        int focus
        int stamina
        PhysicalIllness[] physicalIllnesses
    }

PhysicalIllness is an abstract class, extended by concrete illnesses. Everything they do or react to depends on their host body. They are born inside a body, they “live” within it, and their existence doesn’t mean anything outside of a body.

## Questions

In such a scenario, wouldn’t having a Body instance injected into PhysicalIllness‘s constructor, and stored as a (say, host_body) reference used throughout the life of the illness, be fine? The illness could then respond to life events (say sleeped, hour_elapsed) on its own and impact its host body accordingly:

    abstract class PhysicalIllness {
        Body hostBody

        PhysicalIllness(Body hostBody) {
            this.hostBody = hostBody
        }

        void onAcquired() {}
        void onHourElapsed() {}
        void onBodySleeped() {}
        void onGone() {}
    }

    class Headache extends PhysicalIllness {
        void onAcquired() {
            this.hostBody.focus -= 10
        }
        void onHourElapsed() {
            this.hostBody.focus += 2
        }
        // ...
    }

Tight coupling actually seems natural to me here. However, it does produce a cyclic/circular dependency, as Body holds references to PhysicalIllness instances and PhysicalIllness also holds a reference to its “parent”, its host body.

Could you people point to any downside of designing things this way, in terms of code maintenance/unit-testing, flexibility, or anything else? I realize there are other answers about this, but since every scenario is different, I’m still unsure if they apply here as well.

## Alternative (without circular dependency)

One alternative would obviously be to remove the coupling by having PhysicalIllness instances be notified of every event by the body (which would pass itself as argument in the process). This requires every method of PhysicalIllness to have a Body parameter:

    abstract class Illness {
        void onAcquired(Body hostBody) {}
        void onHourElapsed(Body hostBody) {}
        // ...
    }

    class Headache extends Illness {
        void onAcquired(Body hostBody) {
            hostBody.focus -= 10
        }
        void onHourElapsed(Body hostBody) {
            hostBody.focus += 2
        }
        // ...
    }

    class Body {
        // ...

        void onHourElapsed() {
            for (PhysicalIllness illness in this.physicalIllnesses) {
                illness.onHourElapsed(this);
            }
        }

        // ...
    }

I feel like this is clunky and actually less logical, because it means a physical illness can exist outside of a body (you can construct one without a host body), and therefore all methods require the “obvious” host_body parameter.

If I had to summarize this post with one single question: should tight coupling and/or circular dependency between parent/children components be avoided in all situations?


## In algorithm analysis what does it mean for bounds to be “tight”

For example, we could say that alg(x) runs in $$\Omega(n)$$ time, but that this bound is not “tight”.

What is meant by “tight”? Is it that the bound isn’t at its maximum?

So maybe a tighter bound could be $$\Omega(1)$$??
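A worked example may make this concrete (my own illustration): consider $$f(n) = 3n^2 + n$$.

```latex
f(n) = 3n^2 + n
\;\Rightarrow\; f(n) = \Omega(n) \quad \text{(true, but not tight)}
\;\Rightarrow\; f(n) = \Omega(n^2) \ \text{and}\ f(n) = O(n^2)
\;\Rightarrow\; f(n) = \Theta(n^2).
```

A lower bound is tight when it matches the function's true order of growth, i.e. when a matching upper bound holds. Note that $$\Omega(1)$$ is a weaker lower bound than $$\Omega(n)$$, not a tighter one: for lower bounds, tighter means larger.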

## Tight bounds for finite de Finetti’s theorem

de Finetti’s theorem roughly states that an infinite sequence of exchangeable random variables is conditionally i.i.d. I am looking for tight bounds for a finite version of de Finetti’s theorem in the following scenario.

Suppose the random variable $$X_i$$ is drawn from $$[n] = \{1, \cdots, n\}$$ for all $$1 \le i \le m$$ (not necessarily i.i.d.). Further suppose that the sequence $$X_1, \cdots, X_m$$ is exchangeable, meaning that $$(X_1, \cdots, X_m)$$ and $$(X_{\sigma(1)}, \cdots, X_{\sigma(m)})$$ have the same joint distribution for every permutation $$\sigma$$.

Are there tight bounds known on the distance (in total variation) between the distribution of the sequence $$(X_1, \cdots, X_m)$$ and the closest mixture of product distributions? In particular, I am interested in bounds whose dependence on the size of the state space, $$|S| = n$$ here, is tight.

I have found only one paper that deals with this issue, this paper by Diaconis and Freedman. Theorem 3 in that paper bounds the distance between the distribution of such a sequence and the closest product distribution, but it is not mentioned whether the dependence on $$|S|$$ in their result is necessary. I would appreciate any references that deal with my situation.

## How to reduce tight coupling of some code from a class?

I have an algorithm that creates a version for an entity, and I then save that version against the two entities below:

1) Variant

2) Category

Below is the flow of my method execution:

1) Transformation logic

2) Version

Code :

    public class Variant
    {
        public int VariantId { get; set; }
        public int CategoryId { get; set; }
    }

    interface IEntityVersion
    {
        string CategoryIds { get; }
        string GetVersion();
    }

    public class EntityVersion : IEntityVersion
    {
        private string _categoryIds = "100,101,102";
        public string CategoryIds => _categoryIds;

        public string GetVersion()
        {
            // Algorithm to generate version
            return "C1.1.1";
        }
    }

    public sealed class VariantProcessor
    {
        private readonly IEntityVersion _entityVersion = new EntityVersion();
        private readonly string _myAppConnectionString;
        private readonly Action<Variant> _transform;

        public VariantProcessor(string myAppConnectionString, Action<Variant> transform)
        {
            _myAppConnectionString = myAppConnectionString;
            _transform = transform;
        }

        public void Process(Variant model)
        {
            string variantVersion = string.Empty;
            try
            {
                _transform(model);
                try
                {
                    variantVersion = _entityVersion.GetVersion();
                    VariantRepo.UpdateVariantVersion(_myAppConnectionString, model.VariantId, variantVersion);
                }
                catch (Exception)
                {
                    // rollback Transform operation
                }
            }
            catch (Exception)
            {
            }
            finally
            {
                UpdateCategoryWithVersion(variantVersion);
            }
        }

        private void UpdateCategoryWithVersion(string version)
        {
            var categoryIds = _entityVersion.CategoryIds.Split(',').Select(x => int.Parse(x)).ToArray();
            for (int i = 0; i < categoryIds.Length; i++)
            {
                CategoryRepo.UpdateCategoryWithVersion(_myAppConnectionString, categoryIds[i], version);
            }
        }
    }

    public class VariantRepo
    {
        public static void UpdateVariantVersion(string connectionString, int variantId, string version)
        {
            using (SqlConnection connection = new SqlConnection(connectionString))
            {
                string query = "query";
                using (SqlCommand cmd = new SqlCommand(query, connection))
                {
                    connection.Open();
                    cmd.Parameters.AddWithValue("@VariantId", variantId);
                    cmd.Parameters.AddWithValue("@Version", version);
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }

    public class CategoryRepo
    {
        public static void UpdateCategoryWithVersion(string connectionString, int categoryId, string version)
        {
            using (SqlConnection connection = new SqlConnection(connectionString))
            {
                string query = "query";
                using (SqlCommand cmd = new SqlCommand(query, connection))
                {
                    connection.Open();
                    cmd.Parameters.AddWithValue("@Version", version);
                    cmd.Parameters.AddWithValue("@CategoryId", categoryId);
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }

    public interface IVariantProcessor
    {
        void Process(Variant model);
        StatisticsModel Process(List<Subvariants> subvariants);
    }

    public class AggregateCalculator : IVariantProcessor
    {
        private string _myAppConnectionString;

        public void Process(Variant model)
        {
            _myAppConnectionString = ConfigurationManager.ConnectionStrings["dbConnectionString"].ConnectionString;
            new VariantProcessor(_myAppConnectionString, m => Transform(m)).Process(model);
        }

        private void Transform(Variant model)
        {
            // logic
        }
    }

There are a couple of problems with my code design above:

1) Versioning is tightly coupled to my VariantProcessor. If tomorrow I decide to remove the versioning part from my code, I will have to come back and refactor here, violating the Open/Closed Principle.

2) The VariantProcessor class deals with both executing the transformation logic and UpdateCategoryWithVersion. Is it possible to hide all version-related logic behind the version class to improve the readability of the VariantProcessor class?

Can someone please give me suggestions or ideas to improve my code design?