## Why are nested anonymous pure functions shielded from evaluation?

I tried the following code (ignoring the warning messages):

```mathematica
{#, # &, Function[{x}, #], Function[{#}, x], Function[{#}, #]} &@7
(* result: {7, #1 &, Function[{x}, 7], Function[{7}, x], Function[{7}, 7]} *)
```

I wonder why `#&` was not changed into `7&`. I saw a "possible issue" similar to this mentioned in ref/Slot, but I couldn't find further documentation about it. Is this a bug, or is it deliberately designed this way?
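For intuition only (this is an analogy, not Mathematica's actual evaluation semantics): the behavior resembles how nested lambdas in other languages shadow an outer parameter, so the outer argument is never substituted into the inner function's own parameter positions. A minimal Python sketch:

```python
# Analogy: the inner anonymous function introduces its own parameter
# named x, which shadows the outer x inside its body.
outer = lambda x: (x, lambda x: x)

value, inner = outer(7)
print(value)     # 7: the outer x is substituted here
print(inner(9))  # 9: inside the nested lambda, x is a fresh parameter
```

In the same spirit, a nested `Function`/`&` in Mathematica introduces its own slot scope, which is why the inner `#` is shielded.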

## First-order mutual-recursive functions Turing-complete or incomplete?

Suppose we have an ML-like programming language with only first-order terms (i.e. no higher-order functions or lambdas; variables cannot hold functions). However, the language allows recursion in all its forms.

Is it true that this language is Turing-incomplete, but becomes complete if we add basic heap semantics (i.e. pointers and manipulation of RAM-like memory)?
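One way to probe the question: if the first-order language has unbounded integers, plain first-order recursion can already interpret a two-counter (Minsky) machine, which is Turing-complete. The sketch below is written in deliberately first-order Python style (functions are only called by name, never passed as values); it is an illustration of the idea, not a proof about any particular ML-like language.

```python
# A two-counter Minsky machine interpreter in first-order style.
# The program is a tuple of instructions:
#   ('inc', counter, next_state)
#   ('dec', counter, next_if_nonzero, next_if_zero)
#   ('halt',)

def run(program, state, c0, c1):
    instr = program[state]
    if instr[0] == 'halt':
        return (c0, c1)
    if instr[0] == 'inc':
        if instr[1] == 0:
            return run(program, instr[2], c0 + 1, c1)
        return run(program, instr[2], c0, c1 + 1)
    # 'dec': decrement the counter if nonzero, otherwise take the zero branch
    if instr[1] == 0:
        if c0 > 0:
            return run(program, instr[2], c0 - 1, c1)
        return run(program, instr[3], c0, c1)
    if c1 > 0:
        return run(program, instr[2], c0, c1 - 1)
    return run(program, instr[3], c0, c1)

# Example program: move the contents of counter 0 into counter 1.
move = (
    ('dec', 0, 1, 2),   # state 0: if c0 > 0, decrement and go to state 1; else halt
    ('inc', 1, 0),      # state 1: increment c1, return to state 0
    ('halt',),
)
print(run(move, 0, 5, 0))  # (0, 5)
```

This suggests the interesting version of the question is about languages whose base values are bounded, where the heap would supply the unbounded storage.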

## Where do hash functions get their preimage resistance? [migrated]

I read through this answer and it seemed to make sense to me, but when I try to construct a simpler example to explain it to myself, I lose something in the process.

Here is the much simpler hash function I wrote after reading the description of how MD5 works.

```
1. Take a single-digit integer input M
2. Define A[0] to be some public constant
3. for (int i = 1; i <= 4; i++):
       A[i] = (A[i-1] + M) mod 10
4. return A[4]
```

This hash function uses the message word in multiple rounds, which is what the answer says leads to preimage resistance. But with some algebra using mod addition we can reduce this "hash function" to just A[i] = (A[0] + i*M) mod 10.

```
A[1] = (A[0] + M) mod 10
A[2] = (A[1] + M) mod 10
     = ((A[0] + M) mod 10 + M) mod 10          // substitute A[1]
     = ((A[0] + M) mod 10 + M mod 10) mod 10   // M is a single digit, so M mod 10 = M
     = ((A[0] + M) + M) mod 10                 // (a mod n + b mod n) mod n = (a + b) mod n
     = (A[0] + 2M) mod 10
```

Repeating the same substitution gives A[3] = (A[0] + 3M) mod 10, and so on.

Because A[i] = (A[0] + i*M) mod 10 is not preimage resistant, I'm confused about which operation in a hash function gives it its preimage resistance. To phrase my question another way: if I wanted to write a very simple hash function, what would I need to include to make it preimage resistant?
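The algebraic collapse above can be confirmed by brute force; a minimal Python sketch (the function names are mine):

```python
def toy_hash(M, A0):
    # The four-round construction from the question.
    A = A0
    for _ in range(4):
        A = (A + M) % 10
    return A

def closed_form(M, A0):
    # The collapsed form: A[4] = (A[0] + 4*M) mod 10.
    return (A0 + 4 * M) % 10

# The collapse is what kills preimage resistance: given a target digit t
# and the public constant A0, a preimage (when one exists) is found by
# trying at most ten candidate messages.
def preimage(t, A0):
    for M in range(10):
        if closed_form(M, A0) == t:
            return M
    return None  # 4*M mod 10 only reaches even offsets, so some t are unreachable
```

Since gcd(4, 10) = 2, only targets with t - A0 even have preimages at all; a real compression function avoids this kind of linear collapse by mixing the message in through non-linear, non-commuting round operations.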

## Problem plotting expression involving generalized hypergeometric functions ${}_2F_2$

I’m trying to plot a graph for the following expectation

$$\mathbb{E}\left[ a \mathcal{Q} \left( \sqrt{b } \gamma \right) \right]=a 2^{-\frac{\kappa }{2}-1} b^{-\frac{\kappa }{2}} \theta ^{-\kappa } \left(\frac{\, _2F_2\left(\frac{\kappa }{2}+\frac{1}{2},\frac{\kappa }{2};\frac{1}{2},\frac{\kappa }{2}+1;\frac{1}{2 b \theta ^2}\right)}{\Gamma \left(\frac{\kappa }{2}+1\right)}-\frac{\kappa \, _2F_2\left(\frac{\kappa }{2}+\frac{1}{2},\frac{\kappa }{2}+1;\frac{3}{2},\frac{\kappa }{2}+\frac{3}{2};\frac{1}{2 b \theta ^2}\right)}{\sqrt{2} \sqrt{b} \theta \Gamma \left(\frac{\kappa +3}{2}\right)}\right)$$

where $$a$$ and $$b$$ are constant values, $$\mathcal{Q}$$ is the Gaussian Q-function, defined as $$\mathcal{Q}(x) = \frac{1}{\sqrt{2 \pi}}\int_{x}^{\infty} e^{-u^2/2}du,$$ and $$\gamma$$ is a random variable with a Gamma distribution, i.e., $$f_{\gamma}(y) = \frac{1}{\Gamma(\kappa)\theta^{\kappa}} y^{\kappa-1} e^{-y/\theta}$$ with $$\kappa > 0$$ and $$\theta > 0$$.

This equation was also obtained with Mathematica, so it appears to be correct. I get the same plotting issue in Matlab.

Here are some examples, in which I have checked the analytical results against the simulated ones.

When $$\kappa = 12.85$$, $$\theta = 0.533397$$, $$a=3$$ and $$b = 1/5$$ it returns the correct value $$0.0218116$$.

When $$\kappa = 12.85$$, $$\theta = 0.475391$$, $$a=3$$ and $$b = 1/5$$ it returns the correct value $$0.0408816$$.

When $$\kappa = 12.85$$, $$\theta = 0.423692$$, $$a=3$$ and $$b = 1/5$$ it returns the value $$-1.49831$$, which is negative. However, the correct result should be a value around $$0.0585$$.

When $$\kappa = 12.85$$, $$\theta = 0.336551$$, $$a=3$$ and $$b = 1/5$$ it returns the value $$630902$$. However, the correct result should be a value around $$0.1277$$.

Therefore, the issue appears as $$\theta$$ decreases: for values of $$\theta > 0.423692$$ the analytical results match the simulated ones, and the issue only happens when $$\theta \le 0.423692$$.

I'd like to know whether this is an accuracy issue or whether I'm missing something, and whether there is a way to plot a graph that correctly matches the simulation. Perhaps the above equation can be derived in terms of other functions, or simplified in a way that gives more accurate results.
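One way to test whether this is a cancellation/precision issue is to evaluate the expectation by direct numerical integration at high precision and compare it against the closed form. A sketch using Python's mpmath, with the parameter values taken from the examples above (the function names are mine):

```python
from mpmath import mp, mpf, gamma, sqrt, exp, erfc, quad, inf

mp.dps = 50  # 50 significant digits, to rule out cancellation effects

def gaussian_q(x):
    # Q(x) = erfc(x / sqrt(2)) / 2
    return erfc(x / sqrt(2)) / 2

def expectation_by_quadrature(a, b, kappa, theta):
    # E[a Q(sqrt(b) * gamma)] with gamma ~ Gamma(kappa, theta),
    # integrated directly against the Gamma density.
    def integrand(y):
        dens = y**(kappa - 1) * exp(-y / theta) / (gamma(kappa) * theta**kappa)
        return gaussian_q(sqrt(b) * y) * dens
    return a * quad(integrand, [0, inf])

# The problematic parameter point from the question:
val = expectation_by_quadrature(3, mpf(1) / 5, mpf('12.85'), mpf('0.423692'))
print(val)  # the question reports the simulated value here is about 0.0585
```

mpmath also provides `hyp2f2`, so the closed-form expression itself can be re-evaluated at `mp.dps = 50` and compared; if the two then agree, the negative and huge values seen at lower $$\theta$$ are a double-precision cancellation artifact (the argument $$1/(2 b \theta^2)$$ grows as $$\theta$$ shrinks, and the two $$_2F_2$$ terms nearly cancel).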

## Break out or bypass php functions

I'm currently doing an online CTF where I have LFI and can read the source code of the upload function. In it I see the following line:

```php
shell_exec('rm -rf ' . 'uploads/' . '*.p*');
```

So any time I upload a .php file, it gets deleted. I tried extensions such as .Php or .PHP, but if the extension is not .php, the PHP code is not executed. It also removes any *.h* file and any .htaccess files.

Is there a way to break out of the code so that the removal of *.p* files never happens, or can I execute .php files without the file extension being .php?

Update 1: I'm also forced to upload the files in a ZIP file; the web application automatically unzips it.
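One detail worth checking (a sketch of my assumption about how that cleanup glob behaves, not of the CTF's actual code): a shell glob like `uploads/*.p*` only matches names directly inside `uploads/`, so a file that the automatic unzip places in a subdirectory would survive the `rm`. Python's `glob` follows the same non-recursive matching rules, which makes this easy to demonstrate:

```python
import glob
import os
import tempfile

# Build a throwaway layout imitating the upload directory.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "uploads", "sub"))
open(os.path.join(root, "uploads", "a.php"), "w").close()
open(os.path.join(root, "uploads", "sub", "b.php"), "w").close()

# The non-recursive pattern the cleanup appears to use:
matched = glob.glob(os.path.join(root, "uploads", "*.p*"))
print([os.path.basename(p) for p in matched])  # ['a.php'] -- b.php is not matched
```

So if the ZIP can be crafted to extract into a subdirectory, the deletion pattern as written would never touch the nested .php file.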

## Relations between deciding languages and computing functions in advice machines

I'm trying to understand the implications of translating between functions and languages for P/Poly complexity. I'm not sure whether the following all makes sense; I'm giving it my best shot given my current understanding of the concepts. (I have a project in which I want to discuss Hava Siegelmann's analog recurrent neural nets, which recognize languages in P/Poly, but I'd like to understand, and be able to explain to others, the implications this has for computing functions.)

Suppose I want to use an advice Turing machine $$T_1$$ to compute a function from binary strings to binary strings, $$f: \{0,1\}^* \rightarrow \{0,1\}^*$$. $$T_1$$ will be a machine that can compute $$f$$ in polynomial time given advice whose size is polynomial in the length of the argument $$s$$ to $$f$$, i.e. $$f$$ is in P/Poly. (Can I say this? I have seen P/Poly defined only for languages, but not for functions with arbitrary (natural-number) values.)

Next suppose I want to treat $$f$$ as defining a language $$L(f)$$, by encoding its arguments and corresponding values into strings, where $$L(f) = \{\langle s,f(s)\rangle\}$$ and $$\langle\cdot,\cdot\rangle$$ encodes $$s$$ and $$f(s)$$ into a single string.

For an advice machine $$T_2$$ that decides this language, the inputs are of length $$n = |\langle s,f(s)\rangle|$$, so the relevant advice for such an input will be the advice for $$n$$.

Question 1: If $$T_1$$ can return the result $$f(s)$$ in polynomial time, must there be a machine $$T_2$$ that decides $$\{\langle s,f(s)\rangle\}$$ in polynomial time? I think the answer is yes: $$T_2$$ can extract $$s$$ from $$\langle s,f(s)\rangle$$, use $$T_1$$ to calculate $$f(s)$$, then encode $$s$$ with $$f(s)$$ and compare the result with the original encoded string. Is that correct?
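The construction in Question 1 can be made concrete; a Python toy, where the pairing encoding and the sample $$f$$ are my own choices (ignoring advice, which doesn't change the shape of the reduction):

```python
def encode(s, v):
    # A simple self-delimiting pairing of two binary strings:
    # double each bit of s, then '01' as a separator, then v.
    return ''.join(b + b for b in s) + '01' + v

def decode(w):
    # Inverse of encode: read doubled bits until the '01' separator.
    i, s = 0, []
    while w[i] == w[i + 1]:
        s.append(w[i])
        i += 2
    return ''.join(s), w[i + 2:]

def f(s):
    # Stand-in for the poly-time function computed by T1:
    # here, binary string reversal.
    return s[::-1]

def decide(w):
    # T2: extract s, recompute f(s) via T1, re-encode, compare.
    try:
        s, v = decode(w)
    except IndexError:
        return False  # malformed encoding
    return encode(s, f(s)) == w
```

Each step (decode, one call to $$T_1$$, re-encode, compare) is polynomial in $$|w|$$, which is why the answer to Question 1 is yes.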

Question 2 (my real question): If we are given a machine $$T_2$$ that can decide $$\{\langle s,f(s)\rangle\}$$ in polynomial time, must there be a way to embed $$T_2$$ in a machine $$T_3$$ so that $$T_3$$ can return $$f(s)$$ in polynomial time?

I suppose that if $$T_2$$ must include $$T_1$$, then the answer is of course yes. $$T_3$$ just uses the capabilities of $$T_1$$ embedded in $$T_2$$ to calculate $$f(s)$$. But what if $$T_2$$ decides $$L(f)$$ some other way? Is that possible?

If we are given $$s$$, we know its length, but not the length of $$f(s)$$. So in order to use $$T_2$$ to find $$f(s)$$, it seems there must be a sequential search through all strings $$s_f = \langle s,r\rangle$$ for arbitrary $$r$$. (I'm assuming that $$f(s)$$ is unbounded in length, but that $$f$$ has a value for every $$s$$; so the search can take an arbitrary length of time, but $$f(s)$$ will ultimately be found.)
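That sequential search can be sketched as follows, using toy inline stand-ins for the encoding and the decider (my own choices, and only an illustration: in the advice setting, each call to the decider on a candidate $$\langle s,r\rangle$$ uses the advice for that candidate's length, which is exactly the complication discussed here):

```python
from itertools import count, product

def encode(s, v):
    # Toy self-delimiting pairing: double each bit of s, '01' separator, then v.
    return ''.join(b + b for b in s) + '01' + v

def f(s):
    # Stand-in for the unknown function; the searcher never calls this directly.
    return s[::-1]

def decide(w):
    # Stand-in for T2: accepts exactly the strings <s, f(s)>.
    i = 0
    while i + 1 < len(w) and w[i] == w[i + 1]:
        i += 2
    s = w[:i:2]
    rest = w[i + 2:] if i + 1 < len(w) else None
    return rest is not None and encode(s, f(s)) == w

def recover(s):
    # T3: enumerate candidate results r in length order; for each, ask the
    # decider whether <s, r> is in L(f). The running time grows with |f(s)|,
    # not just |s| -- the point made in the question.
    for n in count(0):
        for bits in product('01', repeat=n):
            r = ''.join(bits)
            if decide(encode(s, r)):
                return r

print(recover('1011'))  # '1101'
```

The enumeration is exponential in $$|f(s)|$$, which is why bounding the search purely in terms of the input length $$|s|$$ is problematic.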

One thought I have is that the search for a string $$s_f$$ that encodes $$s$$ with $$f(s)$$ has time complexity that depends on the length of the result $$f(s)$$ (plus $$|s|$$, but that would be swamped when $$f(s)$$ is long).

So now the time complexity does not have to do with the length of the input, but only the length of $$f(s)$$. Maybe $$L(f)$$ is in P/Poly if $$f$$ is in P? (Still confused here.)

Thinking about these questions in terms of Boolean circuits has not helped.

## Reduce with inverse trig functions

```mathematica
c = 5.; b = 3.; a = 2.; len = 7.5;
u[t_] := Reduce[
  NSolve[(2 c + a Cos[t] - b Cos[u])^2 + (a Sin[t] - b Sin[u])^2 - len^2 == 0],
  u, Reals]
Plot[u[t], {t, 0, 2 Pi}]
```

## In the dataflow programming paradigm, programs are modeled as directed graphs. Are the edges of the graph variables, and are the vertices functions?

As I understand it in dataflow programming, programs are structured as directed graphs, an example of which is below

Is it true to say that the arrows (or edges) represent the variables within a program and the vertices (blue circles) represent programmatic functions? Or is this too much of a simplification?

I am interested in understanding how dataflow languages actually apply graph theory.
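The reading in the question — edges as value carriers, vertices as operators — can be made concrete. A minimal Python sketch of a pull-based dataflow evaluator (the graph, names, and scheduling are my own simplifications, not any particular dataflow language):

```python
# Vertices are operators; each edge carries a value from one vertex's
# output to another vertex's input. A vertex fires once all of its
# input edges hold values.

graph = {
    # node: (function, list of input node names)
    "a":   (lambda: 2, []),
    "b":   (lambda: 3, []),
    "add": (lambda x, y: x + y, ["a", "b"]),
    "sq":  (lambda x: x * x, ["add"]),
}

def evaluate(graph, node, cache=None):
    # Pull-based evaluation: resolve each input edge, then fire the vertex.
    if cache is None:
        cache = {}
    if node not in cache:
        fn, inputs = graph[node]
        cache[node] = fn(*(evaluate(graph, src, cache) for src in inputs))
    return cache[node]

print(evaluate(graph, "sq"))  # 25
```

In this toy, the edges exist only as the input lists plus the cached values flowing along them, so "edges are variables" is roughly right for a single firing; in stream-oriented dataflow languages an edge is closer to a queue of successive values than to a single variable.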

## Plotting 2 functions at the same time with Manipulate[]

Here is a code that works well:

```mathematica
f1[x_] := Sqrt[25 - x^2]
Manipulate[
 Plot[f1[x], {x, from, to}, AspectRatio -> Automatic,
  PlotRange -> {{-20, 20}, {-20, 20}},
  Epilog -> {Text["From y: " <> ToString[f1[from]], Scaled[{1, 1}], {1, 1}],
    Text["To y: " <> ToString[f1[to]], Scaled[{1, 0.96}], {1, 1}]}],
 {{from, -10, "from x"}, -10, 10, Appearance -> "Labeled"},
 {{to, 10, "to x"}, -10, 10, Appearance -> "Labeled"}]
```

output like:

Now I want to plot two functions at the same time. I tried something like this:

```mathematica
f1[x1_] := Sqrt[25 - x1^2]
f2[x2_] := -Sqrt[25 - x2^2]
Manipulate[
 Plot[{f1[x1_], f2[x2_]}, {x1, from1, to1}, {x2, from2, to2},
  AspectRatio -> Automatic, PlotRange -> {{-20, 20}, {-20, 20}},
  Epilog -> {Text["From y1: " <> ToString[f[from1]], Scaled[{1, 1}], {1, 1}],
    Text["To y1: " <> ToString[f[to1]], Scaled[{1, 0.96}], {1, 1}]},
  Epilog -> {Text["From y2: " <> ToString[f[from2]], Scaled[{1, 1}], {1, 1}],
    Text["To y2: " <> ToString[f[to2]], Scaled[{1, 0.96}], {1, 1}]}],
 {{from1, -10, "from1 x"}, -10, 10, Appearance -> "Labeled"},
 {{to1, 10, "to x"}, -10, 10, Appearance -> "Labeled"},
 {{from2, -10, "from2 x"}, -10, 10, Appearance -> "Labeled"},
 {{to2, 10, "to2 x"}, -10, 10, Appearance -> "Labeled"}]
```

But it does not plot anything. How can I correct the code above?

## Is there a name for the class of distance functions that are compatible with k-d trees?

The typical nearest-neighbor search implementation for k-d trees prunes a branch when the distance between the target and the pivot along the current axis exceeds the smallest distance found so far. This is correct (it doesn't wrongly prune any points) for any Minkowski distance. Is there a broader, well-known class of distance functions that are compatible? Formally, I think the necessary and sufficient condition is just

$$d(x,y) \ge |x_i - y_i|$$

for all $$x, y \in \mathbb{R}^n$$ and $$1 \le i \le n$$.
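The proposed condition can be probed empirically; a Python sketch that random-tests whether a given distance function dominates the per-coordinate differences (the Minkowski family passes, while a toy scaled metric fails):

```python
import random

def minkowski(p):
    def d(x, y):
        return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1 / p)
    return d

def chebyshev(x, y):
    # The p -> infinity limit of the Minkowski family.
    return max(abs(a - b) for a, b in zip(x, y))

def dominates_coordinates(d, dim=3, trials=10_000, seed=0):
    # Check d(x, y) >= |x_i - y_i| for every axis i on random inputs.
    rng = random.Random(seed)
    for _ in range(trials):
        x = [rng.uniform(-10, 10) for _ in range(dim)]
        y = [rng.uniform(-10, 10) for _ in range(dim)]
        dist = d(x, y)
        if any(dist < abs(a - b) - 1e-9 for a, b in zip(x, y)):
            return False
    return True

print(dominates_coordinates(minkowski(1)))   # True
print(dominates_coordinates(minkowski(2)))   # True
print(dominates_coordinates(chebyshev))      # True
# Averaging the coordinate differences is still a metric, but it can
# underestimate a single axis, so axis-distance pruning would be unsafe:
print(dominates_coordinates(lambda x, y: sum(abs(a - b) for a, b in zip(x, y)) / 3))  # False
```

The last example illustrates why metric axioms alone are not enough: the pruning rule needs exactly the coordinate-domination property stated above.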