Ideal time complexity in the analysis of distributed protocols

I need some explanation about the definition of ideal time complexity. My textbook says:

The ideal execution delay or ideal time complexity, T: the execution delay experienced under the restrictions “Unitary Transmission Delays” and “Synchronized Clocks;” that is, when the system is synchronous and (in the absence of failure) takes one unit of time for a message to arrive and to be processed.

What is meant by “Synchronized Clocks”?

Take, for example, the broadcast problem and the flooding protocol.

In this protocol, each uninformed node waits until some informed node (at the beginning, only the source) sends it the information, and then it resends the information to all its neighbors.

Now, the ideal time complexity of this protocol is at most the eccentricity of the source, and so at most the diameter of the communication graph.

Now, if this is the ideal time complexity, then necessarily all nodes send messages to their neighbors in parallel, correct?

And we are assuming that:

  • The source sends the message to each of its neighbors => 1 unit of time
  • The neighbors of the source send the message to their neighbors => 1 unit of time

and so on, until we reach the node farthest from the source.

Is this a correct view?
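To make this concrete, here is a minimal sketch of what I have in mind (Python; the graph and node names are made up for illustration). In a synchronous execution with unit delays, the number of flooding rounds equals the eccentricity of the source:

```python
from collections import deque

# A hypothetical communication graph as an adjacency list.
graph = {
    "s": ["a", "b"],
    "a": ["s", "c"],
    "b": ["s", "c"],
    "c": ["a", "b", "d"],
    "d": ["c"],
}

def flood_rounds(graph, source):
    """Simulate synchronous flooding: in each round, every node
    informed so far sends to all its neighbors in parallel.
    Returns the number of rounds until all nodes are informed."""
    informed = {source}
    frontier = {source}
    rounds = 0
    while len(informed) < len(graph):
        # One synchronous round: the whole frontier transmits at once.
        frontier = {v for u in frontier for v in graph[u]} - informed
        informed |= frontier
        rounds += 1
    return rounds

def eccentricity(graph, source):
    """BFS distance from the source to the node farthest from it."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values())

print(flood_rounds(graph, "s"))  # 3
print(eccentricity(graph, "s"))  # 3 -- the two coincide
```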

Smoothed analysis of the Partition problem

I am studying smoothed analysis and trying to apply it to the Partition decision problem: given $n$ real numbers with a sum of $2S$, decide whether there exists a subset with a sum of exactly $S$.

The common definition of the smoothed runtime complexity of an algorithm is: given $n$ and $\sigma$, the smoothed runtime of an algorithm is the maximum, over all inputs of size $n$, of the expected runtime on the input when it is perturbed by a perturbation of size $\sigma$, e.g. by adding to each input a number selected randomly from a normal distribution with standard deviation $\sigma$, or from any distribution with support $[0,\sigma]$.

If I apply this definition to the Partition problem, it seems that for any $\sigma > 0$ the runtime complexity is $O(1)$, since for any random noise added to the original numbers, no matter how small, the answer is “no” with probability 1.
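A quick numerical sketch of that intuition (illustrative only: floating-point noise stands in for a genuinely continuous perturbation, and the instance is made up):

```python
import itertools
import random

random.seed(0)
n = 12
base = [random.randint(1, 100) for _ in range(n)]
sigma = 1e-6

# Perturb each number with continuous noise of scale sigma.
perturbed = [x + random.gauss(0, sigma) for x in base]
half = sum(perturbed) / 2

# Enumerate all proper subsets: with continuous noise, the probability
# that any subset sums to *exactly* half the total is 0.
exact = any(
    sum(subset) == half
    for r in range(1, n)
    for subset in itertools.combinations(perturbed, r)
)
print(exact)  # almost surely False, for any sigma > 0
```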

This is strange, since in the more common examples of smoothed analysis, the runtime complexity depends on $\sigma$, and here it does not.

Is there something I misunderstood? What is the smoothed runtime complexity of Partition?

With Great Weapon Fighting, is this analysis of the Double Scimitar’s damage correct?

Traditionally, the Greatsword and Greataxe are considered the strongest two-handed weapons (unless you take the Polearm Master feat, in which case Glaives and Halberds rock). Barbarians and Half-Orcs benefit more from the Greataxe’s 1d12, while others prefer the Greatsword’s 2d6.

I want to compare the new Double-Bladed Scimitar from Eberron with other swords.

A double-bladed scimitar is a martial weapon, weighing 6 pounds and dealing 2d4 slashing damage on a hit. It has the two-handed property and the following special property:

  • If you attack with a double-bladed scimitar as part of the Attack action on your turn, you can use a bonus action immediately after to make a melee attack with it. This attack deals 1d4 slashing damage on a hit, instead of 2d4.

For a Fighter or a Paladin with the Great Weapon Fighting style, I was able to build a graph comparing the Greatsword with it. Since Fighters get ASIs at levels 4 and 6, they can usually reach a +5 STR modifier very early, and the Greatsword only becomes the stronger weapon at level 20, when the Fighter makes 4 attacks per turn.

[Graph: average damage per round for the Double-Bladed Scimitar vs. the Greatsword, by character level]

If you allow feats, the Double Scimitar is not eligible for Great Weapon Master (it lacks the Heavy property) or Polearm Master, which could drastically change the average damage graphs of the other two-handed weapons. However, without feats, is my graph correct, and does the Double Scimitar outperform the other options?
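For reference, the expected-damage arithmetic behind my graph is sketched below (Python). It ignores accuracy and crits, and it assumes both that the Great Weapon Fighting reroll applies to every damage die of these weapons and that the bonus attack adds the ability modifier; both assumptions are debatable:

```python
def gwf_die(sides):
    """Expected value of one damage die under Great Weapon Fighting:
    a roll of 1 or 2 is rerolled once and the new roll is kept."""
    plain = (sides + 1) / 2
    return sum(plain if face <= 2 else face
               for face in range(1, sides + 1)) / sides

E4, E6 = gwf_die(4), gwf_die(6)  # 3.0 and ~4.17

def greatsword(attacks, mod):
    # 2d6 + mod per attack
    return attacks * (2 * E6 + mod)

def double_scimitar(attacks, mod):
    # 2d4 + mod per attack, plus a 1d4 bonus-action attack.
    # Assumption: the bonus attack also adds the ability modifier.
    return attacks * (2 * E4 + mod) + (E4 + mod)

for attacks in (1, 2, 3, 4):  # 4 attacks = Fighter level 20
    print(attacks,
          round(greatsword(attacks, 5), 2),
          round(double_scimitar(attacks, 5), 2))
```

With a +5 modifier this gives 53.33 vs. 52.0 at 4 attacks, which matches the crossover at level 20 in my graph.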

Tight analysis for the ratio of $1-\frac{1}{e}$ in the unweighted maximum coverage problem

The unweighted maximum coverage problem is defined as follows:

Instance: A set $E = \{e_1, \dots, e_n\}$ and $m$ subsets of $E$, $S = \{S_1, \dots, S_m\}$.

Objective: find a subset $S' \subseteq S$ such that $|S'| = k$ and the number of covered elements is maximized.

The problem is NP-hard, but a simple greedy algorithm (at each stage, choose a set which contains the largest number of uncovered elements) achieves an approximation ratio of $1-\frac{1}{e}$.

In the following post, there is an example of when the greedy algorithm fails.

Tight instance for unweighted maximum coverage problem?

I wish to prove that the approximation ratio of the greedy algorithm is tight; that is, that the greedy algorithm is not an $\alpha$-approximation for any $\alpha > 1-\frac{1}{e}$.

I think that if I can find, for every $k$ (or for an ascending sequence of $k$'s), an instance where the number of elements covered by the greedy algorithm is $1-(1-\frac{1}{k})^k$ times the number of elements covered by the optimal solution, the tightness of the ratio will be proved, since $1-(1-\frac{1}{k})^k \to 1-\frac{1}{e}$.

Can someone give a clue for such instances?

I thought of an initial idea: let $E = \{a_1,\dots,a_n, b_1,\dots,b_n, \dots, k_1,\dots,k_n\}$ be a set with $n \cdot k$ elements. Let $S$ include $k$ sets of $n$ elements each: $A = \{a_1,\dots,a_n\}, \dots, K = \{k_1,\dots,k_n\}$. The optimal solution will select these $k$ sets and cover all the elements of $E$. Now I want to add $k$ sets to $S$ that will form the solution the greedy algorithm finds, covering a $1-(1-\frac{1}{k})^k$ fraction of the elements of $E$. The first such set, of size $n$, is $S_1 = \{a_1,\dots,a_{n/k},\ b_1,\dots,b_{n/k},\ \dots,\ k_1,\dots,k_{n/k}\}$ ($\frac{n}{k}$ elements from each of the first $k$ sets). The second such set, of size $n - \frac{n}{k}$, is $S_2 = \{a_{n/k+1},\dots,a_{n/k + (n - n/k)/k},\ \dots,\ k_{n/k+1},\dots,k_{n/k + (n - n/k)/k}\}$ (that is, the next $(n - \frac{n}{k}) \cdot \frac{1}{k}$ elements from each of the first $k$ sets), and so on, until we have $k$ additional such sets.

I don’t think this idea works for every $k$ and $n$, and I’m not sure it’s the right approach.
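To sanity-check the idea, here is a small sketch (Python) that builds the instance with $n = k^k$, so that all the divisions come out integral, runs greedy with adversarial tie-breaking, and compares the resulting ratio to $1-(1-\frac{1}{k})^k$:

```python
def tight_instance(k):
    """k disjoint 'optimal' blocks of n = k**k elements each, plus
    k 'bait' sets: bait t takes the next (1/k)-th of what is still
    uncovered in every block, as in the construction above."""
    n = k ** k
    blocks = [set(range(b * n, (b + 1) * n)) for b in range(k)]
    baits = []
    start, remaining = 0, n
    for _ in range(k):
        take = remaining // k
        baits.append({b * n + i for b in range(k)
                      for i in range(start, start + take)})
        start += take
        remaining -= take
    return blocks, baits

def greedy(sets, k):
    """Pick k sets greedily by uncovered count; Python's max breaks
    ties by list order, so listing the baits first is adversarial."""
    covered = set()
    for _ in range(k):
        best = max(sets, key=lambda s: len(s - covered))
        covered |= best
    return len(covered)

k = 4
blocks, baits = tight_instance(k)
opt = k * k ** k                        # the k blocks cover everything
grd = greedy(baits + blocks, k)
print(grd / opt, 1 - (1 - 1 / k) ** k)  # both ~0.6836
```

At every step the current bait ties with every block, so an adversarial tie-break makes greedy take all $k$ baits, and the ratio is exactly $1-(1-\frac{1}{k})^k$.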

Thanks.

How are Complex Numbers and Complex Analysis used in CS?

The world is 3D, and data about it is usually (as far as I know) represented and processed with real numbers. I’ve seen very few cases where complex numbers are used in programming, and none involving anything that doesn’t explicitly require complex numbers.

How are complex numbers used in Computer Science and programming? Which areas of CS/IT use them?

Sensitivity analysis of MST edges

I am working on the following exercise:

Consider an undirected graph $G = (V,E)$. Let $T^* = (V, E_{T^*})$ be an MST, and let $e$ be an edge in $E_{T^*}$. We define the set of all values that can be assigned to $w_e$ such that $T^*$ remains an MST as $I_e$.

  1. Show that $I_e$ is an interval.
  2. Devise an efficient algorithm to calculate $I_e$ for a given edge $e$.
  3. Devise an efficient algorithm that determines all $I_e$ in one step. It should be more efficient than repeatedly using the algorithm from 2.

I did the following:

  1. Consider an edge $e \in E_{T^*}$. Delete it from $G$ and find the new minimum-weight edge connecting the two resulting components, say $e'$. The upper bound of $I_e$ is $w(e')$; the lower bound is $-\infty$.
  2. Use the algorithm sketched in 1. (see the code sketch after this list).
  3. I do not know what to do here. Could you help me?
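For parts 1 and 2, here is a sketch of that algorithm in Python (naive, roughly $O(|V| + |E|)$ per queried edge; the representation and names are mine):

```python
def interval_for_tree_edge(n, edges, tree_edges, e):
    """I_e for a tree edge e = (u, v, w): removing e splits T* into
    two components, and T* remains an MST as long as w_e is at most
    the cheapest non-tree edge crossing that cut (+inf if e is a
    bridge). Vertices are 0..n-1; edges are (u, v, w) triples."""
    u0, v0, _ = e
    # Component of u0 in T* minus e, via DFS.
    adj = [[] for _ in range(n)]
    for a, b, w in tree_edges:
        if {a, b} != {u0, v0}:
            adj[a].append(b)
            adj[b].append(a)
    comp, stack = {u0}, [u0]
    while stack:
        x = stack.pop()
        for y in adj[x]:
            if y not in comp:
                comp.add(y)
                stack.append(y)
    # Cheapest non-tree edge crossing the cut gives the upper bound.
    tree = {frozenset((a, b)) for a, b, w in tree_edges}
    crossing = [w for a, b, w in edges
                if frozenset((a, b)) not in tree
                and (a in comp) != (b in comp)]
    upper = min(crossing) if crossing else float("inf")
    return (float("-inf"), upper)  # I_e = (-inf, upper]

# Tiny made-up example: a path 0-1-2-3 plus two heavier chords.
edges = [(0, 1, 1), (1, 2, 2), (2, 3, 3), (0, 3, 4), (0, 2, 5)]
tree = [(0, 1, 1), (1, 2, 2), (2, 3, 3)]  # an MST of the graph
print(interval_for_tree_edge(4, edges, tree, (1, 2, 2)))  # (-inf, 4]
```

Note that at $w_e = w(e')$ both spanning trees are minimum, so the interval is closed at the top.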

Time-complexity analysis using constants method

I am trying to analyze some algorithms using the constants method, and I’m not sure if I’m doing it right, so I’m posting the algorithm and my attempt here:

```
Algorithm1(A[0..n], n)                    | Cost | Freq
    s = 0                                 | c1   | 1
    for i = 1..n                          | c2   | n
        if A[i] > 0                       | c3   | n - 1
            for j = 1..i                  | c4   | sum_{i=1}^n t_i * i
                if A[j] mod 2 == 0        | c5   | x
                    for k = 1..j          | c6   | y
                        s = s + i + j + k | c7   | z
    return s
```

$x = \sum_{i=1}^n \sum_{j=1}^i t_i \cdot p_j \cdot i \cdot (j - 1)$

$y = \sum_{i=1}^n \sum_{j=1}^i \sum_{k=1}^j t_i \cdot p_j \cdot i \cdot (j - 1) \cdot k$

$z = \sum_{i=1}^n \sum_{j=1}^i \sum_{k=1}^j t_i \cdot p_j \cdot i \cdot (j - 1) \cdot (k - 1)$

Here $t_i \in \{0, 1\}$ is the boolean value of the condition A[i] > 0, and $p_j \in \{0, 1\}$ is the boolean value of the condition A[j] mod 2 == 0.
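To check formulas like these, one can instrument the code with counters and compare the empirical counts against the closed-form sums (a Python sketch; counter names match the cost labels above):

```python
def counts(A, n):
    """Run Algorithm1 on A[0..n], counting how many times each
    costed line executes (c1..c7 as in the table above)."""
    c = dict.fromkeys(["c1", "c2", "c3", "c4", "c5", "c6", "c7"], 0)
    s = 0
    c["c1"] += 1                      # s = 0
    for i in range(1, n + 1):
        c["c2"] += 1                  # for i = 1..n
        c["c3"] += 1                  # if A[i] > 0
        if A[i] > 0:
            for j in range(1, i + 1):
                c["c4"] += 1          # for j = 1..i
                c["c5"] += 1          # if A[j] mod 2 == 0
                if A[j] % 2 == 0:
                    for k in range(1, j + 1):
                        c["c6"] += 1  # for k = 1..j
                        c["c7"] += 1  # s = s + i + j + k
                        s = s + i + j + k
    return s, c

A = [0, 3, 4, -1, 6]  # A[0] is unused, matching A[0..n]
print(counts(A, 4))
```

(Counting conventions differ: here each line is counted once per execution of its body, so discrepancies against the table reveal off-by-one errors in the frequencies.)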

Using Statistical Analysis and Tests on Users’ data

I have a Master’s degree in HCI. As a master’s student, I had to pass some statistics courses at university. In these courses, we were familiarized with tests like t-tests and were required to run them on datasets from participants to compare different prototypes.

My question is: are these tests used in the real-world projects you have experienced in your career so far?
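For concreteness, the kind of analysis I mean looks like this (a minimal sketch with made-up task-completion times, using scipy):

```python
from scipy import stats

# Hypothetical task-completion times (in seconds) for two prototypes.
prototype_a = [12.1, 14.3, 11.8, 13.5, 12.9, 15.0, 13.2]
prototype_b = [10.2, 11.9, 10.8, 12.0, 11.1, 12.5, 10.9]

# Welch's t-test (does not assume equal variances).
t_stat, p_value = stats.ttest_ind(prototype_a, prototype_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```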

Analysis - Help with .htaccess (noindexing PDF files)

I am trying to analyse an .htaccess file. Considering that I work in digital marketing, I understand about 5% of all those rules and directives.

I would like help with all of it, but what puzzles me most is why the code is not enough to noindex the PDF files. There is a specific piece of code for this, but I am wondering whether it conflicts with something else.
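For reference, the piece of code I am referring to looks roughly like the usual recipe below (a generic sketch, not my exact file; it requires mod_headers to be enabled):

```apache
# Ask search engines not to index PDF files.
<IfModule mod_headers.c>
  <FilesMatch "\.pdf$">
    Header set X-Robots-Tag "noindex, nofollow"
  </FilesMatch>
</IfModule>
```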

Thank you, Fabrizio