How to Reconcile Apparent Discrepancy in this Algorithm’s Runtime?

I’m currently working through Algorithms by Dr. Jeff Erickson. The following is an algorithm presented in the book:

NDaysOfChristmas(gifts[2 .. n]):
    for i ← 1 to n
        Sing “On the ith day of Christmas, my true love gave to me”
        for j ← i down to 2
            Sing “j gifts[j],”
        if i > 1
            Sing “and”
        Sing “a partridge in a pear tree.”

Here’s the runtime analysis of the algorithm presented by Dr. Erickson:

The input to NDaysOfChristmas is a list of $n - 1$ gifts, represented here as an array. It’s quite easy to show that the singing time is $\Theta(n^2)$; in particular, the singer mentions the name of a gift $\sum_{i=1}^{n} i = \frac{n(n+1)}{2}$ times (counting the partridge in the pear tree). It’s also easy to see that during the first $n$ days of Christmas, my true love gave to me exactly $\sum_{i=1}^{n}\sum_{j=1}^{i} j = \frac{n(n+1)(n+2)}{6} = \Theta(n^3)$ gifts.

I can’t seem to grasp how it is possible that your “true love” gave you $\Theta(n^3)$ gifts, while a computer scientist looking at this algorithm would say its runtime complexity is $\Theta(n^2)$.

Dr. Erickson also says the name of a gift is mentioned $\frac{n(n+1)}{2}$ times, which is in $\Theta(n^2)$.
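To convince myself of the two counts, I wrote a small Python sketch (my own translation of the pseudocode, not code from the book) that tallies both quantities and checks them against the closed forms:

    def n_days_of_christmas(n):
        mentions = 0  # times a gift name is sung (this drives the singing time)
        gifts = 0     # total number of gifts received
        for i in range(1, n + 1):
            # Day i: sing gifts i, i-1, ..., 2, then the partridge (gift 1).
            for j in range(i, 1, -1):
                mentions += 1
                gifts += j  # j copies of gift j are given on day i
            mentions += 1   # "a partridge in a pear tree"
            gifts += 1
        return mentions, gifts

    for n in (10, 100, 1000):
        mentions, gifts = n_days_of_christmas(n)
        assert mentions == n * (n + 1) // 2          # Theta(n^2)
        assert gifts == n * (n + 1) * (n + 2) // 6   # Theta(n^3)

Running it confirms that the number of mentions (which governs the singing time) grows quadratically, while the number of gifts grows cubically, so the two quantities are measuring different things.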

What is the mathematics behind Facebook friend suggestion algorithms

Can anyone help me understand how Facebook suggests unknown friends? Any references in this regard would also do.

What I know is that there are countless ways in which Facebook could suggest friends. One of the ways Facebook suggests friends is when a person has mutual friends with you, or when that person is directly or indirectly connected to your friendship network (even if you have no mutual friends with them). Along these lines, I am thinking of an algorithm to predict Facebook’s friend suggestions using graphs and their union and intersection operations, as sketched below.

My problem is that I couldn’t find any mathematical description of the mechanisms behind Facebook’s friend suggestions. Any references in this regard will be appreciated.
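For concreteness, here is a toy sketch of the mutual-friends idea using set intersection. This is the common-neighbors heuristic from the link-prediction literature (see, e.g., Liben-Nowell and Kleinberg, "The Link-Prediction Problem for Social Networks"), not Facebook’s actual proprietary algorithm:

    # Hypothetical toy friendship graph as adjacency sets.
    friends = {
        "alice": {"bob", "carol"},
        "bob": {"alice", "carol", "dave"},
        "carol": {"alice", "bob", "dave"},
        "dave": {"bob", "carol"},
    }

    def suggest(user, graph):
        """Rank non-friends by number of mutual friends (common neighbors)."""
        scores = {}
        for other in graph:
            if other == user or other in graph[user]:
                continue
            mutual = graph[user] & graph[other]  # set intersection
            if mutual:
                scores[other] = len(mutual)
        return sorted(scores, key=scores.get, reverse=True)

    print(suggest("alice", friends))  # ['dave'], via two mutual friends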

Examples of SSH key exchange algorithms requiring encryption-capable host keys

In the SSH transport layer spec (RFC 4253), Section 7.1, key exchange algorithms are distinguished based on whether they require an "encryption-capable" or a "signature-capable" host key algorithm.

If I understood their details correctly, the well-known DH-based key exchange algorithms such as curve25519-sha256, diffie-hellman-group14-sha256, and ecdh-sha2-nistp256 all require a signature-capable host key algorithm. What are examples of SSH key exchange algorithms that instead require an encryption-capable host key algorithm?

Global-input-local-output p-time algorithms

Are there polynomial-time algorithms whose input is global but whose output is local in nature? What I have in mind is actually a problem rather than an algorithm: the satisfiability (SAT) problem. Each clause is global information, because the assignments it rules out (the inverses of the ones that satisfy it) are spread throughout the search space. But the goal, a satisfying assignment, is very local, i.e., one point in the search space. I want example problems and solutions that might shed light on SAT.
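A toy illustration of what I mean, on a small 3-CNF instance I made up: each clause rules out a fixed fraction of the entire search space (global), while the goal is a handful of individual points (local).

    from itertools import product

    # Each literal is (variable index, is_positive); 4 variables, 3 clauses.
    clauses = [
        [(0, True), (1, False), (2, True)],
        [(1, True), (2, False), (3, True)],
        [(0, False), (2, True), (3, False)],
    ]

    def satisfies(assignment, clause):
        return any(assignment[v] == pos for v, pos in clause)

    points = list(product([False, True], repeat=4))

    # Global: every 3-literal clause rules out 1/8 of all assignments.
    for c in clauses:
        ruled_out = sum(not satisfies(a, c) for a in points)
        print(f"clause rules out {ruled_out}/{len(points)} assignments")

    # Local: the goal is just the few points satisfying every clause.
    solutions = [a for a in points if all(satisfies(a, c) for c in clauses)]
    print(f"{len(solutions)} satisfying assignments out of {len(points)}")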

What factors of the integer dataset being sorted can I change in order to compare two sorting algorithms?

I am comparing two comparison-based sorting algorithms that use binary data structures: Tree Sort and Heap Sort. I am measuring the time both algorithms take to sort integer datasets of increasing size. However, I am wondering whether there are other variables I could modify in the integer dataset itself, for example its standard deviation, that would benefit my comparison.
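For example, here is a sketch of the kinds of dataset variations I am considering, with the built-in sorted standing in for my Tree Sort and Heap Sort implementations:

    import random
    import time

    def make_dataset(n, kind, rng):
        if kind == "uniform":            # baseline: uniform random keys
            return [rng.randrange(n * 10) for _ in range(n)]
        if kind == "gaussian":           # smaller spread -> more duplicates
            return [int(rng.gauss(0, n ** 0.5)) for _ in range(n)]
        if kind == "few_distinct":       # heavy duplication stresses tie handling
            return [rng.randrange(16) for _ in range(n)]
        if kind == "nearly_sorted":      # presortedness can unbalance a plain BST
            data = list(range(n))
            for _ in range(n // 100):
                i, j = rng.randrange(n), rng.randrange(n)
                data[i], data[j] = data[j], data[i]
            return data
        if kind == "reversed":           # adversarial order for unbalanced trees
            return list(range(n, 0, -1))
        raise ValueError(kind)

    def bench(sort_fn, data):
        start = time.perf_counter()
        sort_fn(list(data))              # copy so each run sees the same input
        return time.perf_counter() - start

    rng = random.Random(42)
    for kind in ("uniform", "gaussian", "few_distinct", "nearly_sorted", "reversed"):
        data = make_dataset(100_000, kind, rng)
        print(kind, bench(sorted, data))  # swap in tree_sort / heap_sort here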

What is considered an asymptotic improvement for graph algorithms?

Let’s say we are trying to solve some algorithmic problem A that depends on an input of size $n$. We say algorithm B, which runs in time $T(n)$, is asymptotically better than algorithm C, which runs in time $G(n)$, if we have $T(n) = O(G(n))$ but $G(n)$ is not $O(T(n))$.

My question is related to the asymptotic running time of graph algorithms, which usually depends on $|V|$ and $|E|$. Specifically, I want to focus on Prim’s algorithm. If we implement the priority queue with a binary heap, the running time is $O(E \log V)$. With a Fibonacci heap we can get a running time of $O(V \log V + E)$.

My question is: do we say that $O(V \log V + E)$ is asymptotically better than $O(E \log V)$?

Let me clarify: I know that if the graph is dense the answer is yes. But if $E = O(V)$, both bounds are the same. I am more interested in what is usually defined as an asymptotic improvement when we have more than one variable, and, even worse, when the variables are not independent ($V - 1 \le E < V^2$, since we assume the graph is connected for Prim’s algorithm).
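To make this concrete, here is the arithmetic as I understand it, using the connected-graph bound above. Since $V \le E + 1$, we have $V \log V + E \le (E + 1) \log V + E = O(E \log V)$, so the Fibonacci-heap bound is never worse. In the dense case $E = \Theta(V^2)$, we get $V \log V + E = \Theta(V^2)$ while $E \log V = \Theta(V^2 \log V)$, so $E \log V$ is not $O(V \log V + E)$. In the sparse case $E = \Theta(V)$, both bounds are $\Theta(V \log V)$. So the Fibonacci-heap bound is at least as good on every connected input and strictly better on some; is that what counts as an asymptotic improvement under the definition above?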

Thanks!

Batching multiple nearest surface queries: Is it faster? Are there better algorithms?

I’m working on an algorithm that computes lots of "nearest point on a triangulated surface" queries in 3D as a way to resample data sets, and I’m wondering if there is any information out there on speeding up these queries. My gut tells me that partitioning the set of query points in a voxel grid or something similar, and processing them in batches, could be a speedup, but I can’t quite see how to use that efficiently. Also, I’m not sure whether the time cost of partitioning would outweigh the search speedup. Is running N independent queries really the best way?

I found that there are papers and research on the all-kNN problem, but that is for searching within a single set. Moreover, those speedups take advantage of the previously computed neighbors or structure within the single set, so I can’t use them. It feels close, though.

Any help is appreciated.
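Here is a sketch of the kind of batching I have in mind, assuming SciPy is available. It is approximate, since it snaps each query to a discrete sampling of the surface rather than the exact nearest surface point, but the tree is built once and all queries are answered in a single vectorized call:

    import numpy as np
    from scipy.spatial import cKDTree

    # Hypothetical stand-ins for the real inputs: a triangulated surface given
    # as vertices (m, 3) and triangle indices (t, 3), plus query points (n, 3).
    rng = np.random.default_rng(0)
    vertices = rng.normal(size=(1000, 3))
    triangles = rng.integers(0, 1000, size=(2000, 3))
    queries = rng.normal(size=(50_000, 3))

    # Sample the surface: vertices plus triangle centroids. A denser sampling
    # gives a better approximation of "nearest point on the surface".
    centroids = vertices[triangles].mean(axis=1)
    samples = np.vstack([vertices, centroids])

    tree = cKDTree(samples)  # built once, shared by every query

    # Batched: one vectorized call answers all N queries; the traversals run
    # in compiled code instead of a Python loop over individual queries.
    dists, nearest_sample = tree.query(queries, k=1)

    # For exact answers, one could use nearest_sample to select candidate
    # triangles near each query and run an exact closest-point-on-triangle
    # test on just those candidates.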

Simplest yet effective algorithms

I am exploring the algorithms and tools necessary for learning from data (training a model with data) and being able to predict a numeric estimate (for example, a house price) or a class (for instance, the species of an iris flower) for any new example I did not have before. I want to start with the simplest algorithms and work toward those that are more complex. I have read that four basic algorithms represent a good starting point for any data scientist.

Regression has a long history in statistics: from building simple but effective linear models of economic, psychological, social, or political data, to hypothesis testing for understanding group differences, to modeling more complex problems with ordinal values, binary and multiple classes, count data, and hierarchical relationships. It is also a common tool in data science, a Swiss Army knife for machine learning that can be used for nearly every problem. Stripped of most of its statistical properties, data science practitioners perceive linear regression as a simple, understandable, yet effective algorithm for estimation and, in its logistic-regression version, for classification as well.

I would like to know more about this simplest of algorithms as a tool in data science for machine learning: linear regression as a simple, understandable, yet effective algorithm for estimation and, if possible, its logistic-regression version for classification as well.
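For reference, a minimal scikit-learn sketch of both uses (synthetic data stands in for house prices, since none is given here; the bundled iris dataset is used for classification):

    from sklearn.datasets import load_iris, make_regression
    from sklearn.linear_model import LinearRegression, LogisticRegression
    from sklearn.model_selection import train_test_split

    # Numeric estimation (e.g., a house price) with plain linear regression,
    # here on synthetic data.
    X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    reg = LinearRegression().fit(X_tr, y_tr)
    print("R^2 on held-out data:", reg.score(X_te, y_te))

    # Classification (the species of an iris flower) with logistic regression.
    iris = load_iris()
    X_tr, X_te, y_tr, y_te = train_test_split(iris.data, iris.target, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("accuracy on held-out data:", clf.score(X_te, y_te))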