## Would first encrypting a password make slow hashing algorithms unnecessary?

If you first encrypt a password using a secret key and then hash the result, and both algorithms are fast, say `sha_256(salt + aes_256(password, secure_key))`, would that make the hash expensive to brute-force without making it expensive to generate?
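
For concreteness, a sketch of the construction being asked about. Python's standard library ships no AES, so this substitutes HMAC-SHA-256 as a stand-in for the fast keyed step `aes_256(password, secure_key)`; the shape (secret-keyed transform, then salted fast hash) is the same:

```python
import hashlib
import hmac

def keyed_then_hashed(password: bytes, secret_key: bytes, salt: bytes) -> bytes:
    # Fast keyed step: a stand-in for aes_256(password, secure_key) from the
    # question, since the standard library has no AES implementation.
    keyed = hmac.new(secret_key, password, hashlib.sha256).digest()
    # Fast salted hash of the keyed output: sha_256(salt + keyed_output).
    return hashlib.sha256(salt + keyed).digest()

digest = keyed_then_hashed(b"hunter2", b"k" * 32, b"s" * 16)
```

Note that both steps are deliberately fast, which is the premise the question is probing.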

## Design an algorithm to find the index of the first occurrence of an element greater than a key

Question: Design an efficient algorithm that takes a sorted array and a key and finds the index of the first occurrence of an element greater than that key.

The question above is taken from Elements of Programming Interviews in Python, page 146. It is a variant question.

I think the question does not mention the output if the key given is the largest element in the given array.

Here is my Python code:

```python
def first_k_bigger(A, key):
    i, j = 0, len(A) - 1
    while i < j:
        mid = (i + j) // 2
        if A[mid] <= key:
            i = mid + 1
        else:
            j = mid
    return i
```

Idea: I use binary search to find the index. We set two pointers $$i$$ and $$j$$ to the beginning and end of the array, find the midpoint, and compare the midpoint element with the given key.

If the midpoint element is less than or equal to the key, then the desired index must lie to the right of the midpoint. So, we set the pointer $$i$$ to the midpoint index $$+1.$$

Otherwise, the midpoint element is greater than the key, so the desired index is at the midpoint or to its left. So, we set $$j$$ to the midpoint index.

Does my code above cover all possibilities?
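
For reference, a quick check of the function on a couple of inputs, including the edge case raised above where the key is at least the largest element:

```python
def first_k_bigger(A, key):
    i, j = 0, len(A) - 1
    while i < j:
        mid = (i + j) // 2
        if A[mid] <= key:
            i = mid + 1
        else:
            j = mid
    return i

A = [1, 2, 3, 5, 5, 7]
print(first_k_bigger(A, 4))  # 3: A[3] == 5 is the first element greater than 4
print(first_k_bigger(A, 7))  # 5, even though no element of A is greater than 7
```

The second call shows the ambiguity mentioned earlier: when the key is at least the maximum, the function still returns the last index rather than signalling "not found".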

## Why does the maximum-matching algorithm fall into the category of fill-reducing algorithms?

My understanding is that “maximum matching” (or “maximum transversal”) algorithms pre-order a matrix to increase numerical stability. In Timothy Davis’ book Direct Methods for Sparse Linear Systems, 2006, however, this algorithm appears in Chapter 7, which is titled “Fill-reducing ordering”. In his more recent survey A survey of direct methods for sparse linear systems, 2016, maximum matching is likewise placed in Section 8, titled “Fill-reducing orderings”.

So far, I had the impression that reordering algorithms can be categorized into 3 classes:

• for numerical stability: maximum transversal, etc
• for fill-reducing: AMD, etc
• for work-reducing or parallelism-increasing: BTF, BBD, etc

I have trouble understanding why all three classes above are put into a single category called fill-reducing.
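
For what it's worth, here is a toy illustration (my own sketch, not taken from Davis' book) of what a maximum transversal computes: a row permutation that places nonzeros on the diagonal, which is about where pivots sit rather than about fill:

```python
def max_transversal(nnz_cols, n):
    """Toy augmenting-path bipartite matching on a sparsity pattern.

    nnz_cols[i] is the set of column indices holding nonzeros in row i.
    Returns match_col, where match_col[j] is the row matched to column j.
    """
    match_col = [-1] * n

    def augment(i, seen):
        for j in nnz_cols[i]:
            if j not in seen:
                seen.add(j)
                if match_col[j] == -1 or augment(match_col[j], seen):
                    match_col[j] = i
                    return True
        return False

    for i in range(n):
        augment(i, set())
    return match_col

# Pattern with a structural zero on the diagonal: row 0 only has col 1.
pattern = [{1}, {0, 1}, {2}]
match_col = max_transversal(pattern, 3)
# Reading the rows in order match_col[0], match_col[1], match_col[2] gives a
# zero-free diagonal: column j has a nonzero in the row now at position j.
assert all(j in pattern[match_col[j]] for j in range(3))
```

This permutation says nothing by itself about the fill produced during factorization, which is the tension the question is about.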

## Approximation algorithms for indefinite quadratic form maximization with linear constraints

Consider the following program: \begin{align} \max_x ~& x^TQx \\ \mbox{s.t.} ~& Ax \geq b \end{align} where $$Q$$ is a symmetric (possibly indefinite) matrix and the inequality is element-wise and constrains feasible solutions to a convex polytope.

This is NP-hard to solve, but what are known approximation results?

A relevant result is given by (Kough 1979). It is shown that this program can be optimized using Benders’ decomposition to within $$\epsilon$$ of the optimum. However, the paper does not seem to clearly specify what this means, or the complexity of the procedure.

I believe the $$\epsilon$$-approximation is in the usual sense employed in the field of mathematical programming: if $$OPT$$ is the optimal value of the program, $$ALG$$ is the result of the above procedure, and $$MIN$$ is the minimal value attainable by a feasible solution, then $$\frac{ALG-MIN}{OPT-MIN} \geq (1-\epsilon).$$ Or something of the sort.

Questions:

• Is the mentioned procedure a polynomial-time algorithm?
• Are there known polynomial-time algorithms yielding approximations to the above program in the traditional sense, i.e. $$ALG \geq \alpha OPT$$ for some $$\alpha < 1$$, constant or not?

Kough, Paul F. “The indefinite quadratic programming problem.” Operations Research 27.3 (1979): 516-533.

## Can algorithms of arbitrarily worse complexity be systematically created?

We’ve all seen the usual hierarchy of asymptotic growth rates. Can we get worse?

Part 1: Can mathematical operations of increasing orders of growth be generated, with or without Knuth’s up-arrow notation?

Part 2: If they can, can algorithms of arbitrary complexities be systematically generated?

Part 3: If such algorithms can be generated, what about programs implementing those algorithms?
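
Regarding Part 1, a direct (and purely illustrative; it is astronomically slow for all but tiny inputs) encoding of Knuth's up-arrow hierarchy shows that operations of strictly increasing growth can at least be generated mechanically:

```python
def up_arrow(a, n, b):
    """Knuth's a ↑^n b: n = 1 is exponentiation, and each additional
    arrow iterates the previous operation, so growth increases with n."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(2, 1, 3))  # 2^3 = 8
print(up_arrow(2, 2, 3))  # 2↑↑3 = 2^(2^2) = 16
print(up_arrow(2, 3, 2))  # 2↑↑↑2 = 2↑↑2 = 4
```

A procedure that deliberately performs on the order of `up_arrow(2, n, n)` steps would then, for each $$n$$, give an algorithm of strictly worse complexity, which is the spirit of Part 2.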

## Why don’t raytracing algorithms include the speed of light?

From what I understand about ray tracing, light propagation is treated as instantaneous from the light source to the viewer. Is there a type of ray tracing where the “rays” travel at the speed of light or are affected by gravity? Such methods would be useful for simulating large-scale systems (like planets).

Also, can the same method be applied to sound?

## Inefficiency of search algorithms for intranet or corporate websites caused by poor design and/or implementation

I noticed recently that some of the search features on corporate websites and intranets seem to have implemented some of the search algorithms that are commonly associated with Facebook Graph Search or Google’s SEO ranked search results.

This is commonly seen when a user enters a very specific keyword but the exact matching results are not returned or not ranked highly on the search results, whereas a partially matching result will be ranked highly.

My suspicion is that many organizations, having built internal social networks and run extensive analytics on internal traffic, tend to implement the types of search algorithms that place more weight on criteria such as recency and number of existing page views when ranking results. Unfortunately, this has the side effect that exactly matching keywords (e.g. document names and other exact search phrases) do not come back at the top of the search results.

This is despite the fact that many of these search features allow a user to filter results by things like document type and other metadata, which should allow more specific or targeted results to be returned.

Has anyone else experienced this during their research and have you found the cause for this? Other research or examples from end users would also be helpful.

## Approximate algorithms for class P problems

As part of my algorithms course we studied approximation algorithms for NP-complete or NP-hard problems, e.g. “set cover”, “vertex cover”, “load balancing”, etc. My professor asked us, as an extra activity, to learn an approximation algorithm for a problem in P. I searched a lot on Google, but all I found were NP problems.

I was wondering if anybody can help and point me to an approximation algorithm for a problem that already has an exact polynomial-time algorithm. It would be even better if the algorithm is short and simple.

## Is it efficient to use large scale data algorithms on disk?

Is it efficient to use algorithms like locality-sensitive hashing and Bloom filters on disk instead of in memory, for datasets so large that even these structures cannot be kept in memory?
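
For context, a minimal Bloom filter sketch (parameters chosen arbitrarily for illustration, stdlib only) makes the access pattern concrete: the structure is just a bit array probed at $$k$$ pseudo-random positions per operation, which is cheap in RAM but becomes scattered random reads if the bits live on disk:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: m bits, k hash probes per item."""

    def __init__(self, m_bits: int, k_hashes: int):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray(m_bits // 8 + 1)

    def _positions(self, item: str):
        # Derive k bit positions by hashing the item with k different prefixes.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: str) -> None:
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item: str) -> bool:
        # No false negatives; false positives possible.
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

bf = BloomFilter(m_bits=8192, k_hashes=3)
bf.add("alice")
bf.add("bob")
```

Each query touches $$k$$ essentially random positions in the bit array, which is exactly the access pattern that behaves very differently on disk than in memory.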

## Non-Mathematical Algorithms that behave like Taylor Series

Are there tasks or problems for which no algorithm can give a complete or precise answer but there do exist algorithms that can give increasingly “better” answers if permitted to execute for longer and longer?

For example, computing the value of $$e$$ by means of a Taylor Series yields an increasingly accurate answer, but at no finite point in time will the exact value of $$e$$ be computed.
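
The $$e$$ example above in code form, as a straightforward iterator over the partial sums of $$\sum_k 1/k!$$:

```python
import math

def e_partial_sums(terms: int):
    """Yield the partial sums of sum(1/k!), which converge to e."""
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        yield total
        term /= k + 1

approximations = list(e_partial_sums(12))
# Each extra term shrinks the error, but the exact value is never reached.
print(abs(approximations[-1] - math.e))
```

Procedures with this shape, where running longer yields a strictly better answer but no finite run yields an exact one, are often called anytime algorithms.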

In particular, I’m wondering if there are problems or tasks like this one that are not mathematical in nature. Or is my understanding flawed and indeed all algorithms are in some sense mathematical in nature?

Thanks for reviewing my question.