## Does a lower bound on the distributional error of deterministic algorithms imply the same error probability for randomized algorithms?

Consider some problem $$P$$, and assume we sample the problem instance uniformly at random (u.a.r.) from some set $$I$$. Let $$p$$ be a lower bound on the distributional error of every deterministic algorithm on $$I$$; that is, every deterministic algorithm fails on at least a $$p$$-fraction of $$I$$.
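In symbols (just restating the assumption above; the notation $$\mathcal{A}$$ is mine):
\begin{align}
\Pr_{x \sim \mathrm{Unif}(I)}\left[\mathcal{A} \text{ fails on } x\right] \ge p \quad \text{for every deterministic algorithm } \mathcal{A}.
\end{align}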

Does this also imply that every randomized algorithm $$\mathcal{R}$$ must fail with probability at least $$p$$ when, again, the input is sampled u.a.r. from $$I$$?

My reasoning is as follows. Let $$R$$ be the random variable representing the random bits used by the algorithm. Then
\begin{align}
\Pr[\mathcal{R} \text{ fails}] &= \sum_\rho \Pr[\mathcal{R} \text{ fails and } R=\rho] \\
&= \sum_\rho \Pr[\mathcal{R} \text{ fails} \mid R=\rho] \Pr[R=\rho] \\
&\ge p \sum_\rho \Pr[R=\rho] = p.
\end{align}
For the inequality, I used the fact that once we fix $$R = \rho$$, we effectively have a deterministic algorithm $$\mathcal{R}_\rho$$; since $$R$$ is independent of the input, $$\Pr[\mathcal{R} \text{ fails} \mid R=\rho]$$ is exactly the distributional error of $$\mathcal{R}_\rho$$ on $$I$$, which is at least $$p$$ by assumption.
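To sanity-check the averaging argument, here is a small simulation on a toy problem. The problem (zero-query guessing), the parameter choices, and all names are my own illustrative assumptions, not part of the question: the algorithm must output an $$n$$-bit instance without reading it, so any deterministic algorithm outputs a fixed string and fails on all but one of the $$2^n$$ instances, giving $$p = 1 - 2^{-n}$$. The natural randomized guesser should then fail with probability exactly $$p$$, matching the bound with equality.

```python
import random

# Toy setting: I = {0,1}^n, instance sampled u.a.r., and the algorithm
# must output the instance without reading it (zero-query guessing).
# Every deterministic algorithm outputs a fixed string, so it fails on
# all but one of the 2^n instances, i.e. p = 1 - 2^(-n).
n = 4
p = 1 - 2 ** (-n)

trials = 200_000
fails = 0
for _ in range(trials):
    x = random.getrandbits(n)    # instance drawn u.a.r. from I
    rho = random.getrandbits(n)  # random bits R = rho; the fixed-rho algorithm just outputs rho
    fails += (rho != x)          # once rho is fixed, failure is a deterministic event

print(f"estimated Pr[R fails] = {fails / trials:.4f}, lower bound p = {p:.4f}")
```

The estimate should be close to $$p = 0.9375$$ for $$n = 4$$, consistent with the inequality above.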

I can’t find a flaw in my reasoning, but I would be quite surprised if this implication were indeed true.

## The distributional derivatives of $$g(x_1,x_2):=\operatorname{sign} x_1 - \operatorname{sign} x_2$$

I would like to prove the following: if $$g(x_1,x_2):=\operatorname{sign} x_1 - \operatorname{sign} x_2$$ for $$x=(x_1,x_2) \in \mathbb{R}^2$$, then $$\partial^{(1,0)}g, \partial^{(0,1)}g \notin L_{1,\mathrm{loc}}(\mathbb{R}^2)$$, but $$\partial^{(1,1)}g \in L_{1,\mathrm{loc}}(\mathbb{R}^2)$$. I don’t see how to approach this. Could someone perhaps help me?
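For concreteness, here is the definition of the distributional derivative I am working with (the standard pairing against test functions; I am assuming $$C_c^\infty(\mathbb{R}^2)$$ is the intended test-function space). For a multi-index $$\alpha = (\alpha_1,\alpha_2)$$ and $$\varphi \in C_c^\infty(\mathbb{R}^2)$$,
\begin{align}
\langle \partial^{(\alpha)} g, \varphi \rangle &= (-1)^{|\alpha|} \int_{\mathbb{R}^2} g(x)\, \partial^{(\alpha)} \varphi(x)\, dx, \\
\text{so, e.g.,}\quad \langle \partial^{(1,0)} g, \varphi \rangle &= -\int_{\mathbb{R}^2} \bigl(\operatorname{sign} x_1 - \operatorname{sign} x_2\bigr)\, \frac{\partial \varphi}{\partial x_1}(x_1,x_2)\, dx_1\, dx_2.
\end{align}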