What properties of a discrete function make it a theoretically useful objective function?

A few things to get out of the way first: I’m not asking what properties the function must have for a global optimum to exist; we assume the objective function has a (possibly non-unique) global optimum which could, in principle, be found by an exhaustive search of the candidate space. I’m also using "theoretically useful" in a slightly misleading way, because I couldn’t find a better phrasing for this question. A "theoretically useful cost function", the way I’m defining it, is:

A function to which some theoretical optimisation algorithm can be applied such that the algorithm has a non-negligible chance of finding the global optimum in less time than exhaustive search

A few simplified, one-dimensional examples of where this thought process came from:

[figure: graph of a bimodal function exhibiting both a global and a local maximum]

Here’s a function which, while not being convex or differentiable (as it’s discrete), is easily optimisable (in terms of finding the global maximum) with an algorithm such as Simulated Annealing.
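To make "easily optimisable" concrete, here is a minimal sketch of the kind of algorithm I mean, assuming a discrete 1-D domain indexed by integers and some hypothetical objective f; it is a bare-bones simulated annealing loop, not tuned to any particular problem:

```python
import math
import random

def simulated_annealing(f, domain, steps=10_000, t0=1.0, cooling=0.999):
    """Maximise f over a list-like discrete domain by stepping between neighbouring indices."""
    i = random.randrange(len(domain))                  # start at a random candidate
    best_i, temperature = i, t0
    for _ in range(steps):
        j = max(0, min(len(domain) - 1, i + random.choice((-1, 1))))  # a neighbour of i
        delta = f(domain[j]) - f(domain[i])
        # Always accept improvements; accept worse moves with a temperature-dependent probability.
        if delta >= 0 or random.random() < math.exp(delta / temperature):
            i = j
        if f(domain[i]) > f(domain[best_i]):
            best_i = i
        temperature *= cooling                         # cool down gradually
    return domain[best_i]

# Hypothetical bimodal example in the spirit of the first plot:
# f = lambda x: -(x - 70) ** 2 / 10 + 30 * math.cos(x / 5)
# print(simulated_annealing(f, list(range(100))))
```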

[figure: graph of a boolean function with 100 zero values and a single 1 value]

Here is a function which clearly cannot be a useful cost function: if it were, it would imply that an arbitrary, unstructured search problem could be classically solved faster than exhaustive search.

[figure: graph of a function which takes random discrete values]

Here is a function which I do not believe can be a useful cost function, as evaluating neighbouring points gives no meaningful information about the direction in which one must move to find the global maximum.

The crux of my thinking so far is along the lines of "applying the cost function to points in the neighbourhood of a point must yield some information about the location of the global optimum". I attempted to formalise this (in a perhaps convoluted manner) as follows:

Consider the set $D$ representing the search space of the problem (and thus the domain of the function) and the undirected graph $G$, where each element of $D$ is assigned a node in $G$, and each node in $G$ has edges connecting it to its neighbours in $D$. We then remove elements from $D$ until the objective function has no non-global local optima over this domain and no plateaus exist (i.e. the value of the cost function at each point in the domain is different from the value of the cost function at each of its neighbours). Every time we remove an element $e$ from $D$, we remove the corresponding node from the graph $G$ and add edges which directly connect each neighbour of $e$ to each other, so that they become each other's new neighbours. The number of elements which remain in the domain after this process is applied is designated $N$. If $N$ is a non-negligible proportion of $\#(D)$ (i.e. significantly greater than the proportion of $\#(\{\text{possible global optima}\})$ to $\#(D)$), then the function is a useful objective function.
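To make this reduction concrete, here is a rough sketch of the process in code for a 1-D domain whose neighbours are adjacent indices; the order in which offending points are removed is naive and purely my own assumption about one possible implementation:

```python
def reduce_domain(f, domain):
    """Remove points until f (to be maximised) has no non-global local optima and no
    plateaus over the remaining, re-wired neighbourhood graph; return what is left."""
    # Start with a path graph: each index is a node, neighbours are adjacent indices.
    neighbours = {i: {j for j in (i - 1, i + 1) if 0 <= j < len(domain)}
                  for i in range(len(domain))}
    global_max = max(f(x) for x in domain)

    def offending(i):
        vals = [f(domain[j]) for j in neighbours[i]]
        plateau = any(v == f(domain[i]) for v in vals)
        non_global_local_max = (vals and all(v < f(domain[i]) for v in vals)
                                and f(domain[i]) < global_max)
        return plateau or non_global_local_max

    changed = True
    while changed:
        changed = False
        for i in list(neighbours):
            if offending(i):
                nbrs = neighbours.pop(i)          # remove the node...
                for a in nbrs:
                    neighbours[a].discard(i)
                for a in nbrs:                    # ...and connect its neighbours to each other
                    neighbours[a] |= nbrs - {a}
                changed = True
    return [domain[i] for i in sorted(neighbours)]   # N is the length of this list
```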

Whilst this works well both for the function which definitely is useful and for the definitely-not-useful boolean function, applying this process to the random function seems to give the wrong answer, as the number of elements that would remain and give a function with no local optima IS a non-negligible proportion of the total domain.

Is my definition on the right track? Is this a well-known question that I just can’t figure out how to search for? Does there exist some optimisation algorithm that could theoretically find the optimum of a completely random function faster than exhaustive search, or is my assertion that none could correct?

In conclusion, what is different about the first function that makes it a good candidate for optimisation, compared to other functions which are not?

Can a KeePass file theoretically be cracked offline?

So you create a .kdbx file, protected by a password.

AFAIK, in asymmetric-key schemes and in WPA-AES, brute-forcing consists of:

  • try a random password on the private key / on the router
  • if it doesn’t log you in, try another.

So you immediately know whether you hit the correct password.

What about a password manager’s database? You know nothing about the content of the file. How do you know you managed to crack it?
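To make the question concrete, the offline guess-and-check loop I have in mind looks roughly like the sketch below. The whole question is whether the file format gives an attacker a cheap way to implement the candidate_key_is_correct step (some verifier stored in, or derivable from, the file itself); both helpers are hypothetical stand-ins, not the real KeePass format or API:

```python
import hashlib

def derive_key(password: str, salt: bytes, rounds: int = 100_000) -> bytes:
    # Stand-in key derivation; the real file stores its own KDF parameters in the header.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)

def candidate_key_is_correct(key: bytes, kdbx_blob: bytes) -> bool:
    # Hypothetical check: compare some verifier in the file (a check value, a MAC, or a
    # recognisable decrypted structure) against the result of decrypting with `key`.
    # Whether such a verifier exists is exactly what I am asking.
    raise NotImplementedError

def offline_bruteforce(kdbx_blob: bytes, salt: bytes, wordlist):
    for guess in wordlist:              # no server involved, so no rate limiting or lockout
        key = derive_key(guess, salt)
        if candidate_key_is_correct(key, kdbx_blob):
            return guess
    return None
```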

Estimating P in Amdahl’s Law theoretically and in practice

In parallel computing, Amdahl’s law is mainly used to predict the theoretical maximum speedup for program processing using multiple processors. If we denote the speedup by $S$, then Amdahl’s law is given by the formula:

$$S = \frac{1}{(1 - P) + \frac{P}{N}}$$

where $P$ is the proportion of a system or program that can be made parallel, $1 - P$ is the proportion that remains serial, and $N$ is the number of processors. My question is: how can we compute or estimate $P$ for a given program?

More specifically, my question has two parts:

  • How can we compute $P$ theoretically?
  • How can we compute $P$ in practice?

I know my question could be easy, but I am learning.

ref : https://www.techopedia.com/definition/17035/amdahls-law
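One practical approach (my own sketch, not taken from the linked reference) is to invert the formula: time the program on 1 and on $N$ processors, compute the observed speedup $S$, and solve Amdahl’s law for $P$, which gives $P = (1 - 1/S)/(1 - 1/N)$:

```python
def estimate_parallel_fraction(t_serial: float, t_parallel: float, n_procs: int) -> float:
    """Estimate P by inverting Amdahl's law from a measured speedup.

    With S = t_serial / t_parallel, solving S = 1 / ((1 - P) + P / N) for P
    gives P = (1 - 1/S) / (1 - 1/N).
    """
    if n_procs < 2:
        raise ValueError("need at least 2 processors to estimate P this way")
    speedup = t_serial / t_parallel
    return (1 - 1 / speedup) / (1 - 1 / n_procs)

# Made-up timings: 100 s on 1 core, 40 s on 4 cores -> S = 2.5
# P = (1 - 1/2.5) / (1 - 1/4) = 0.6 / 0.75 = 0.8, i.e. roughly 80% of the work parallelises.
print(estimate_parallel_fraction(100.0, 40.0, 4))
```

Theoretically, one could instead estimate $P$ from a profile of the single-processor run, as the fraction of total execution time spent in regions that can be made parallel.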

Theoretically, if you know the hash of a program one intends to install and you generate another file that hashes to that value, what could you do?

If I know the hash of a program you intend to install is d306c9f6c5…, if I generate some other file that hashes to that value, I could wreak all sorts of havoc. – from https://nakamoto.com/hash-functions/

Theoretically, if you know the hash of a program one intends to install and you generate another file that hashes to that value, what could you do?
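For context on why this matters, installers and download pages typically verify integrity with exactly this kind of comparison; here is a minimal sketch (the file name is made up and the truncated hash is just the placeholder from the quote):

```python
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_download(path: str, published_hash: str) -> bool:
    # If an attacker can produce a different file with the same hash (a second preimage),
    # this check passes and the victim installs the attacker's file instead of the real one.
    return sha256_of(path) == published_hash

# Usage (hypothetical file and truncated hash):
# verify_download("installer.bin", "d306c9f6c5...")
```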

Theoretically, what if I were to change some magic numbers in, say, AES

Purely theoretically. I know it’s a bad idea to try to invent your own encryption, and that’s not the intention here. Just a thought experiment.

Say I change some or all of the magic numbers used in, say, AES (but this could also apply to other algorithms) and create “AES-RobIII”. Of course this would be incompatible with the current AES algorithm.

  1. First: I know these numbers are (usually) chosen very carefully (sometimes maybe even crafted so that they contain a secret backdoor, but I’m not interested in conspiracy theories etc.). How are these numbers generally chosen, and I assume there are many (‘infinite’) possible variations? Do they come from an ‘RNG’ (tuned to some specific rules)? Are they chosen ‘manually’? I know they have to satisfy some polynomial(s), but do they (the cryptographers designing the algorithm) just start with a random seed, or pick a specific number, or…? (A sketch of how the AES S-box constants are derived follows below.)
  2. Since these magic numbers are, along with the algorithm, out in the open (as any good encryption algorithm should be), I could theoretically ship them along with the encrypted message and ‘load’ the tables with these numbers before the algorithm is kicked off to encrypt/decrypt, right?
  3. Then why have these numbers as constants in the algorithm in the first place (unless size is a priority, of course)? Or why don’t we have, say, AES-I, AES-II and AES-III, with the only difference being another set of magic numbers? You could even consider the table(s) of magic numbers a sort of “extra key”, so that, say, a company could use its own generated set of numbers internally (similar to how private/public keypairs are generated, for example) for extra security in case something leaked. I realise it probably adds little (if anything at all) beyond complexity, but I was just wondering.

Again, I realise this (probably) won’t add any meaningful advantage, if any at all, but I was just wondering. Also, this question is based on the assumption that if I change any of the magic numbers, encrypt something with them, and then try to decrypt it (with the same, altered, magic numbers), it would still decrypt correctly.
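Regarding point 1, at least for the AES S-box the 'magic numbers' are not picked at random: each entry is the multiplicative inverse in GF(2^8) modulo the Rijndael polynomial, followed by a fixed affine transformation. A rough sketch of that derivation, for illustration only:

```python
def gf_mul(a: int, b: int) -> int:
    """Multiply two bytes in GF(2^8) modulo the Rijndael polynomial x^8 + x^4 + x^3 + x + 1."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return p

def gf_inverse(a: int) -> int:
    """Multiplicative inverse in GF(2^8); by convention 0 maps to 0."""
    if a == 0:
        return 0
    return next(x for x in range(1, 256) if gf_mul(a, x) == 1)

def sbox_entry(a: int) -> int:
    """Inverse in GF(2^8) followed by the fixed affine transformation with constant 0x63."""
    x = gf_inverse(a)
    out = 0
    for i in range(8):
        bit = ((x >> i) ^ (x >> ((i + 4) % 8)) ^ (x >> ((i + 5) % 8))
               ^ (x >> ((i + 6) % 8)) ^ (x >> ((i + 7) % 8)) ^ (0x63 >> i)) & 1
        out |= bit << i
    return out

SBOX = [sbox_entry(a) for a in range(256)]
assert SBOX[0x00] == 0x63 and SBOX[0x01] == 0x7C   # matches the published table
```

The round constants are likewise successive powers of x in the same field, so shipping the tables as data and deriving them from the specification amount to the same thing, which is relevant to point 2.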

How can Turing complete machines exist theoretically if the halting problem is undecidable

As the question says: if I put on the tape of a Turing-complete machine a program that solves the halting problem, along with the correct inputs, the program will never end its execution, regardless of memory and time. Isn’t the halting problem a computational problem that cannot be executed by a Turing-complete machine in such a way that it halts at some point?
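A sketch of the standard diagonalisation argument may help separate the two ideas; halts below is the hypothetical decider assumed to exist for the sake of contradiction, and it is this decider, not the Turing-complete machine, that turns out to be impossible:

```python
def halts(program_source: str, input_data: str) -> bool:
    """Hypothetical total decider: True iff running program_source on input_data halts.
    Assumed to exist only for the sake of contradiction."""
    raise NotImplementedError

def paradox(program_source: str) -> None:
    # If the program would halt when run on its own source, loop forever; otherwise halt.
    if halts(program_source, program_source):
        while True:
            pass

# Feeding paradox its own source contradicts whatever halts() answers, so halts() cannot
# be implemented -- but a Turing-complete machine runs paradox itself just fine; it simply
# may never halt, and that is allowed.
```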

Is it theoretically possible to dynamically grow array size in stack memory?

I was wondering, given how stack memory usually works, whether an array-like primitive type that is allowed to grow in size could exist.
Such a primitive type would work as follows (a toy model of these rules is sketched below).

  • Any one function is allowed to have only one such primitive variable.
  • Appending to such a primitive variable is allowed only within the function it was initialized in.
  • If the function that such a variable was initialized in is not the last (topmost) frame on the stack, the variable is not allowed to grow in size.


I was wondering whether:

  • such a primitive type is theoretically possible;
  • it is something that is already in use.
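Here is a toy model of those three rules (plain Python, so it only illustrates the bookkeeping, not real stack memory; all names are made up):

```python
class StackModel:
    """Toy model of a call stack where only the topmost frame's growable array may grow."""

    def __init__(self):
        self.frames = []                      # each frame owns at most one growable array

    def push_frame(self, name: str) -> int:
        self.frames.append({"name": name, "array": []})
        return len(self.frames) - 1           # handle identifying the new frame

    def pop_frame(self) -> None:
        self.frames.pop()

    def append(self, frame_handle: int, value) -> None:
        # Rule: the array may only grow while its owning frame is the last one on the stack.
        if frame_handle != len(self.frames) - 1:
            raise RuntimeError("cannot grow: owning frame is not on top of the stack")
        self.frames[frame_handle]["array"].append(value)

# Usage:
stack = StackModel()
main = stack.push_frame("main")
stack.append(main, 1)                         # fine: "main" is the topmost frame
callee = stack.push_frame("callee")
stack.append(callee, "x")                     # fine: "callee" is now topmost
try:
    stack.append(main, 2)                     # blocked: "main" is no longer the last frame
except RuntimeError as err:
    print(err)
```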

Would it be theoretically possible to do a block-level backup of /data/media without root?

I have nothing against root except eFUSE. Whoever’s idea eFUSE was . . .

As far as I understand:
Because of amateur users and clueless minimalists who don’t know how to utilise tools that are intended for power users, the development team of TWRP has controversially decided that their NANDROID backup does not include /data/media.

More information and sources in these comments:
1. Best practice for backing up /data/media?
2. Best practice for backing up /data/media?

What I want is a block-level image backup of the entire phone, so that I can return to the exact spot later on.

I want that block-level backup to include the /data/media partition.

Now, I have found out it is possible via ADB, but that requires root. If there is really no other way, I would consider taking the risk of rooting the phone after doing any other possible backup method (adb app backup, file backup, etc.)

But my question is:

Is it technically possible to do a block-level (dd) backup of the /data/media partition without root?

Had the development team of TWRP decided to include /data/media, pleasing power users and maximalists instead of minimalists and amateur users, would it actually have been possible without root?