I am working through it step by step and I am stuck on vertex 7. I determined that it is a regular vertex and that helper(e_i-1) is not a merge vertex, so I look for the edge directly to its left in the sweep line. My question is: would e6 be considered to the left of it, or is there none? Any already completed examples that I could see would help me understand this greatly.

# Tag: correct

## Is my recursive algorithm for Equivalent Words correct?

Here is my problem.

**Problem** Given two words and a dictionary, find out whether the words are equivalent.

**Input:** The dictionary, D (a set of words), and two words v and w from the dictionary.

**Output:** A transformation of v into w by substitutions such that all intermediate words belong to D. If no transformation is possible, output “v and w are not equivalent.”

I need to write both a recursive and a dynamic programming algorithm. For the recursive version, I came up with the algorithm below. Is it correct?

```
EquivalentWordsProblem(v, w, D)
    m <- len(v)
    n <- len(w)
    substitutions = []                           # array to save substitutions
    if m != n:
        return "v and w are not equivalent"
    else
        for i <- m downto 1 do
            for j <- n downto 1 do
                if v[i] != w[j]:
                    substituted_word <- v[1…i-1] + w[j]    # we substitute v[i] for w[j]
                    if substituted_word in D:
                        substitutions.append(substituted_word)
                        return EquivalentWordsProblem(v[1…m-i], w, D)   # recur on the string of length m - i
                    else:
                        return EquivalentWordsProblem(v[1…m-1], w, D)   # recur on the string, decreasing its length by 1
    if len(substitutions) != 0:
        return substitutions
    else
        return "v and w are not equivalent"
```
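The recursion above shrinks the word length, but equivalence under single-letter substitutions is naturally a shortest-path search: each dictionary word is a node, and two words are adjacent when they have the same length and differ in exactly one position. For comparison, here is a minimal breadth-first-search sketch (my own illustration, assuming lowercase words and `D` given as a Python set):

```python
from collections import deque

def equivalent_words(v, w, D):
    """Shortest chain of single-letter substitutions from v to w,
    with every intermediate word in the dictionary D (a set).
    Returns the chain as a list, or None if v and w are not equivalent."""
    if len(v) != len(w):
        return None
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    prev = {v: None}          # word -> its predecessor in the BFS tree
    queue = deque([v])
    while queue:
        word = queue.popleft()
        if word == w:
            # Walk the predecessor links back to v to recover the chain.
            chain = []
            while word is not None:
                chain.append(word)
                word = prev[word]
            return chain[::-1]
        for i in range(len(word)):
            for c in alphabet:
                cand = word[:i] + c + word[i + 1:]
                if cand in D and cand not in prev:
                    prev[cand] = word
                    queue.append(cand)
    return None
```

For example, `equivalent_words("cat", "dog", {"cat", "cot", "cog", "dog"})` returns the chain `["cat", "cot", "cog", "dog"]`. BFS also makes the dynamic-programming version easy to see: the `prev` table memoizes which words have already been reached.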

## Why does my digital bank need my phone date and hour to be correct?

I’m not from Information Security or any IT-related area, but I want to know: is there any security reason for my digital bank to demand that my phone be set to "Automatic Date & Time"?

For example, when I’m abroad, I cannot transfer money to a friend because the app says my date & time are incorrect.

Is that badly programmed software, or does it have a purpose?
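One plausible reason (an assumption on my part, since banks rarely document this) is that the app derives security codes from the clock, as in time-based one-time passwords (TOTP, RFC 6238): the phone and the server must agree on the current 30-second window, or the generated code is rejected. A minimal sketch of why a skewed clock breaks this:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password: HMAC-SHA1 over the
    number of 30-second steps elapsed since the Unix epoch."""
    counter = unix_time // step              # which time window we are in
    msg = struct.pack(">Q", counter)         # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F               # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"
# A clock that is off by a few minutes lands in a different 30-second
# window and therefore produces a different, rejected code:
print(totp(secret, 59))
print(totp(secret, 59 + 300))
```

So a forced "Automatic Date & Time" setting can be a crude way to guarantee the device clock matches the server's, rather than bad programming.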

## Why does TempDB spill happen even though statistics are correct?

I read a great article published by Brent Ozar and came up with some questions related to memory grants. I am unable to post my questions in the comment section of his article, so I thought I would ask for help here.

- **Question: How much data is spilled to disk — 400 MB or 60 MB (7643KB*8)?**

In the article he states:

> And no matter how many times I update statistics, I’ll still get a ~400MB spill to disk.

I am kinda confused here.

- **Question: If everything is okay with the estimates, stats are up to date, the box has sufficient memory, and no other queries were running at the time, then why does the spill to disk happen?**

> Look at the estimated number of rows versus the actual number of rows. They’re identical. The stats are fine.

> I’m not using a small server, either: my virtual machine has 32GB RAM, and I’ve allocated 28GB of that to SQL Server. There are no other queries running at the same time – it’s just one lonely query, spilling to disk…
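On the first question, the "60 MB" figure looks like page arithmetic: SQL Server reports sort/hash spill warnings as a count of 8 KB pages written to tempdb, so 7643 pages comes out to roughly 60 MB (this reading is my own assumption, not something stated in the article, and it would not contradict a larger cumulative I/O figure):

```python
# SQL Server spill warnings report 8 KB pages written to tempdb.
# Checking what 7643 such pages amount to:
pages_spilled = 7643
page_size_kb = 8
spilled_kb = pages_spilled * page_size_kb
spilled_mb = spilled_kb / 1024
print(f"{spilled_kb} KB ≈ {spilled_mb:.1f} MB")
```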

## Is it correct or incorrect to say that an input, say $C$, causes the average run-time of an algorithm?

I was going through the text Introduction to Algorithms by Cormen et al., where I came across an excerpt which I felt required a bit of clarification.

As far as I have learned, the *Best Case* and *Worst Case* time complexities of an algorithm arise for certain concrete inputs (say an input $A$ causes the worst-case run time of an algorithm, or an input $B$ causes the best-case run time, asymptotically), but there is no such concrete input which causes the average-case runtime, because the average-case run time is, by definition, the runtime averaged over all possible inputs. It is something which, I suppose, exists only mathematically.

On the other hand, an input which is neither the best-case nor the worst-case input is supposed to lie somewhere between the two extremes, and the algorithm's performance on it is measured by none other than the average-case time complexity, since the average-case complexity lies between the worst- and best-case complexities just as our input lies between the two extremes.

Is it correct or incorrect to say that an input, say $C$, causes the average run-time of an algorithm?

The excerpt from the text which made me ask such a question is as follows:

In context of the analysis of quicksort,

In the average case, PARTITION produces a mix of “good” and “bad” splits. In a recursion tree for an average-case execution of PARTITION, the good and bad splits are distributed randomly throughout the tree. Suppose, for the sake of intuition, that the good and bad splits alternate levels in the tree, and that the good splits are best-case splits and the bad splits are worst-case splits. Figure (a) shows the splits at two consecutive levels in the recursion tree. At the root of the tree, the cost is $n$ for partitioning, and the subarrays produced have sizes $n-1$ and $0$: the worst case. At the next level, the subarray of size $n-1$ undergoes best-case partitioning into subarrays of size $(n-1)/2 - 1$ and $(n-1)/2$. Let’s assume that the boundary-condition cost is $1$ for the subarray of size $0$.

The combination of the bad split followed by the good split produces three subarrays of sizes $0$, $(n-1)/2 - 1$, and $(n-1)/2$ at a combined partitioning cost of $\Theta(n)+\Theta(n-1)=\Theta(n)$. Certainly, this situation is no worse than that in Figure (b), namely a single level of partitioning that produces two subarrays of size $(n-1)/2$, at a cost of $\Theta(n)$. Yet this latter situation is balanced!
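The excerpt's intuition — that a bad split followed by a good split costs only a constant factor more than a single balanced split — can be checked numerically. Below is a rough cost model of the two recursion trees (my own sketch, not CLRS code; it counts partitioning cost plus a unit boundary cost, as in the excerpt):

```python
def cost_alternating(n):
    """Partitioning cost when a worst-case split (n-1, 0) is always
    followed by a near-best-case split of the size-(n-1) subarray."""
    if n <= 1:
        return 1  # boundary-condition cost, as in the excerpt
    m = n - 1
    # Bad split: cost n, subarrays of sizes m and 0 (the 0 costs 1).
    # Good split of the size-m subarray: cost m, near-even halves.
    return n + 1 + m + cost_alternating((m - 1) // 2) + cost_alternating(m // 2)

def cost_balanced(n):
    """Partitioning cost when every split is as even as possible."""
    if n <= 1:
        return 1
    return n + cost_balanced((n - 1) // 2) + cost_balanced(n // 2)

for n in (100, 1000, 10000):
    print(n, cost_alternating(n) / cost_balanced(n))
```

The ratio stays bounded by a small constant (around 2) as $n$ grows, which is exactly the "no worse up to a constant, absorbed into the $O$-notation" argument.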

## Is it correct that the existence of cryptography requires $UP \cap coUP \not\subseteq BPP$?

Is it correct that the existence of cryptography requires $UP \cap coUP \not\subseteq BPP$? Or does it require $UP \not\subseteq BPP$?

## How do I correct these errors? I need advice

Please give me advice.

## Is this the correct “standard form” of nonlinear programming (optimization) problem and if it is why it’s in this form?

Rather a simple question, I guess, though it makes me wonder. The standard form I’ve found in the book (and on Wikipedia) is something like this:

$\min f(x)$

$\text{s.t.}$

$h_i(x) = 0$

$g_i(x) \le 0$

Is this considered a “standard form” for nonlinear optimization problems? And if so, why is it defined like this? Why does it have to be the min of the function, and why do the constraints have to be either equal to 0 or less than or equal to 0? I couldn’t find any answer to why it is the way it is. Is there some important reason why it couldn’t be max, for example?
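For what it's worth, fixing min and $\le$/$=$ loses no generality: any maximization or $\ge$ problem can be rewritten into the standard form, as these identities show:

```latex
% max is min of the negated objective (flip the sign of the optimum back):
\max_x f(x) = -\min_x \bigl(-f(x)\bigr)

% a >= constraint is a <= constraint on the negation:
g_i(x) \ge 0 \iff -g_i(x) \le 0

% a general bound folds its right-hand side into the zero:
g_i(x) \le b_i \iff g_i(x) - b_i \le 0
```

So the standard form is a convention: agreeing on one orientation lets theorems (KKT conditions, duality) and solver interfaces be stated once instead of four times.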

## Joining points of polygon in correct order

I have an array of points of some 2D shape (polygon). The polygon could be either crossed (self-intersecting) or convex; I don’t know which. And I want to join those points in the correct order.

My first thought was to take some point as an origin and repeatedly look for the closest remaining one. But this approach doesn’t work for some crossed polygons; for example, on **Image1** it would go from `x3` to `x5`, because that is closer than `x4`, but what I really want is to join `x1-x2-x3-x4-x5-x6`.

After some thinking, I’ve realized that my requirement of a `correct order` is very unclear, because on **Image2** the red lines are in a correct order too. So let’s add the requirement that the polygon's edges should at least not cross.

I'm really confused. Can someone point me in the direction I should move? Maybe it’s some famous algorithm and I just don’t know how to google it properly. Thanks.

**Image1**:

**Image2**:
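One cheap heuristic (an assumption that it fits this data, not a general answer): sort the points by angle around their centroid. This yields a non-crossing polygon whenever the shape is star-shaped as seen from its centroid, which includes every convex polygon, but it can still fail for strongly concave shapes:

```python
import math

def order_by_angle(points):
    """Order 2D points counter-clockwise by angle around their centroid.
    Produces a simple (non-crossing) polygon whenever the shape is
    star-shaped as seen from the centroid (every convex shape is);
    NOT guaranteed for arbitrary concave point sets."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    # atan2 gives the angle of each point relative to the centroid.
    return sorted(points, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))

square = [(0, 0), (1, 1), (0, 1), (1, 0)]
print(order_by_angle(square))  # → [(0, 0), (1, 0), (1, 1), (0, 1)]
```

For arbitrary point sets the problem is under-determined (many simple polygons can pass through the same points), so the angular sort is best treated as a starting point rather than a complete solution.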

## Are My Answers to This Hamming Code Example Correct?

I have attempted to answer this question on Hamming codes, but I am very new to the topic and would appreciate feedback on my answers. Thank you!
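Since the question and answers are in the images, here is a generic Hamming(7,4) encoder and syndrome check to compare working against (my own illustration; the exercise's bit ordering and parity convention may differ):

```python
def hamming74_encode(d):
    """Encode 4 data bits as a Hamming(7,4) codeword
    [p1, p2, d1, p3, d2, d3, d4] with even parity."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_syndrome(c):
    """Return the 1-based position of a single-bit error, or 0 if none."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    return s1 + 2 * s2 + 4 * s3

code = hamming74_encode([1, 0, 1, 1])
print(code)                            # → [0, 1, 1, 0, 0, 1, 1]
corrupted = code[:]
corrupted[4] ^= 1                      # flip the bit at position 5
print(hamming74_syndrome(corrupted))   # → 5, pointing at the flipped bit
```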

## Question

## My Answers