Is it possible for the runtime and input size in an algorithm to be inversely related?

I’m wondering whether an algorithm can have a runtime that decreases monotonically with the input size – just as a fun mental exercise. If not, is it possible to prove that no such algorithm exists? I haven’t been able to come up with an example or a counterexample so far, and this sounds like an interesting problem.

P.S. Something like $O(\frac{1}{n})$, I guess (if such a thing exists).
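
For intuition, here is a toy Python sketch of the shape I have in mind (my own illustration): the work shrinks roughly like $1/n$, but $n$ arrives as a numeric parameter rather than as data that must be read, since anything that reads all $n$ input items already costs $\Omega(n)$.

import time

# Toy sketch: the loop count shrinks roughly like 1/n, so larger n means less
# work. Note that n is a parameter here, not input data that has to be read.
def shrinking_work(n, budget=10_000_000):
    total = 0
    for _ in range(budget // n):   # about budget/n iterations
        total += 1
    return total

for n in (10, 100, 1_000, 10_000):
    start = time.perf_counter()
    shrinking_work(n)
    print(n, round(time.perf_counter() - start, 4))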

Is nonce useless when user input is reflected within an inline script?

I stumbled upon a web app that accepts user input and puts it into a variable inside a script tag.

The script tag does have a nonce attribute.

As I am working on bypassing the XSS filter, it occurred to me that this practice of reflecting user input within an inline script that carries a nonce attribute defeats the purpose of using the nonce in the first place.

Is my understanding correct, or am I missing something here?
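
To make my concern concrete, here is a hypothetical Python sketch of the server-side rendering I assume the app is doing (the nonce value, variable names, and payload are all my own illustration, not taken from the actual app):

# Hypothetical server-side rendering: the user-supplied value is interpolated
# into an inline script that already carries a valid CSP nonce.
nonce = "r4nd0mn0nce"                             # per-response nonce
user_input = "x'; alert(document.domain); //"     # attacker-controlled value

page = f"""
<script nonce="{nonce}">
  var query = '{user_input}';  // reflection point inside the nonce'd script
</script>
"""

print(page)
# The injected statement ends up inside a script that the CSP already trusts
# (it carries the nonce), so the nonce does not block this injection.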

How to separate GUI input from world input?

I have a GUI and a player that has code to detect when the mouse is pressed.

When I click my GUI buttons, the player code also fires, which is undesired behavior. The code in the player is meant for mining or using the current item, not for interacting with buttons.

Some people have suggested using _unhandled_input and mouse filters, but I haven't gotten either of those to work. I'm just wondering what the common approach is in this situation?

Build a PDA for a language with an unknown input alphabet

$L_1, L_2$ are regular languages. We form a new language $L_{12}$ as follows: $L_{12} = \{ w_1 \cdot w_2 \mid w_1 \in L_1 \wedge w_2 \in L_2 \wedge |w_1| = |w_2| \}$

In this exercise I am not given any alphabet, and I'm required to build a PDA for $L_{12}$, but by definition $M = (Q, \Sigma, \Gamma, \delta, q_0, \dashv, F)$ and I don't have any alphabet to work with. Intuitively, whether the two languages share the same alphabet could also affect the solution.
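
My current idea, sketched in Python below (my own paraphrase with a hypothetical DFA encoding, not a formal PDA), is to take the PDA's input alphabet to be the union of the two DFAs' alphabets and mimic the usual construction: push one counter symbol per letter of $w_1$, nondeterministically guess the midpoint, then pop one symbol per letter of $w_2$.

# Sketch of the idea: run a DFA for L1 on a prefix (where the PDA would push one
# counter symbol per letter), guess the midpoint, then run a DFA for L2 on the
# suffix (popping one symbol per letter). Equal push/pop counts enforce |w1| = |w2|.

def run_dfa(dfa, word):
    start, delta, accepting = dfa            # hypothetical DFA encoding
    state = start
    for symbol in word:
        state = delta.get((state, symbol))
        if state is None:                    # missing transition: reject
            return False
    return state in accepting

def in_L12(w, dfa1, dfa2):
    for split in range(len(w) + 1):          # the PDA guesses this split
        w1, w2 = w[:split], w[split:]
        if len(w1) != len(w2):               # stack would not be empty at the end
            continue
        if run_dfa(dfa1, w1) and run_dfa(dfa2, w2):
            return True
    return False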

Running a RAM on a given input

I understand how RAM commands work, but I can't figure out how to take a given input string and find the output. For instance,

there’s a Random Access Machine whose input is a string over {0,1}*. The program is as follows:

1: read
2: store 1
3: read
4: add 1
5: read
6: add 1
7: load 1
8: if a=2 go to 11
9: print 0
10: goto 12
11: print 1
12: end

Now, on the input tape we have i=11101011. How can I find the content of the output tape? What’s the approach?

When we see read, do we read just one character at a time? If so, what exactly do we add 1 to? Is the output also supposed to be in binary?
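
Here is the trace I would attempt, written as a Python sketch, assuming one possible set of conventions (read loads the next input symbol into the accumulator; store 1 / load 1 / add 1 refer to register 1; if a=2 tests the accumulator). Please correct me if the model in my course differs:

def run_program(input_tape):
    tape = [int(c) for c in input_tape]   # symbols consumed left to right
    pos = 0
    regs = {}
    output = []

    def read():
        nonlocal pos
        value = tape[pos]
        pos += 1
        return value

    acc = read()          # 1: read
    regs[1] = acc         # 2: store 1
    acc = read()          # 3: read
    acc += regs[1]        # 4: add 1   (accumulator += register 1)
    acc = read()          # 5: read
    acc += regs[1]        # 6: add 1
    acc = regs[1]         # 7: load 1  (accumulator := register 1)
    if acc == 2:          # 8: if a=2 go to 11
        output.append(1)  # 11: print 1
    else:
        output.append(0)  # 9: print 0, then 10: goto 12
    return output         # 12: end

print(run_program("11101011"))   # -> [0] under these assumed semantics

Under these assumed conventions only the first three symbols of 11101011 are ever read, and step 7 overwrites the accumulator before the comparison at step 8, so the program would print 0. That feels odd, which is why I suspect I have the semantics of read or add wrong.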

In a machine learning system, why use differentially private SGD if our input data is already perturbed by a DP mechanism?

I’m trying to implement my own version of a deep neural network with differential privacy to preserve the privacy of the parties involved in the training dataset.

I’m using the method by Abadi et al. proposed in their seminal paper Deep Learning with Differential Privacy as the basis of my implementation. Now I have trouble understanding one thing in this paper. In their method, they propose a differentially private SGD optimisation function and they use an accountant to keep track of their privacy budget expenditure during each iteration. All of this makes sense: every time you query the data, you need to add controlled noise to it to mitigate the risk of leakage. But before they begin the training process, they add a differentially private PCA layer and filter their data through it.

My confusion is about why we need DP-SGD after this (or, the other way around, why DP-PCA when we’re already ensuring DP with our DP-SGD method). I mean, based on the post-processing principle, if a mechanism is, say, $\varepsilon$-DP, any function performed on the output of that mechanism is also $\varepsilon$-DP. Since we’re already applying an $\varepsilon$-differentially private PCA mechanism to our data, why do we need the whole DP-SGD process after that? I understand the problem with local DP and why it’s much more efficient to apply global DP to the model instead of the training data, but I’m wondering: if we’re already applying DP during the training phase, is it really necessary for the PCA to be DP as well, or could we have just used normal PCA or another dimensionality reduction method?
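
For reference, my understanding of the DP-SGD update they propose is roughly the following (a rough numpy paraphrase of mine with placeholder hyperparameters, not the paper's reference code; the moments accountant is omitted): clip each per-example gradient, add Gaussian noise to the sum, then take the step.

import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0, noise_multiplier=1.1):
    # Clip each example's gradient to L2 norm <= clip_norm, bounding its influence.
    clipped = [g / max(1.0, np.linalg.norm(g) / clip_norm) for g in per_example_grads]
    summed = np.sum(clipped, axis=0)
    # Add Gaussian noise scaled to the clipping bound, then average and step.
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    noisy_mean = (summed + noise) / len(per_example_grads)
    return params - lr * noisy_mean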

Provide a polynomial time algorithm that decides whether or not the language recognized by some input DFA consists entirely of palindromes

Everything you need to know is in the question statement. I believe that the DFA has to be acyclic (meaning its language is finite), which can be checked in polynomial time. However, finding all paths from the start state to an accept state can take exponential time in the worst case.

Where is the best place to filter user input?

Users can interfere with any piece of data transmitted between the client and the server, including request parameters, cookies, and HTTP headers. Where is the best place to filter user input, on the client side or on the server side?

If the filtering happens on the client side, users can look at the filter implementation and easily circumvent it. But what about the server side?
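
To make the question concrete, here is a minimal sketch of what I understand server-side filtering to look like (the field name and allowed pattern are made up for illustration); the idea is that this code runs where the user cannot modify it, so any client-side filtering would only be a usability nicety.

import re

ALLOWED_USERNAME = re.compile(r"^[A-Za-z0-9_]{3,32}$")   # example whitelist, not a recommendation

def handle_signup(form):
    username = form.get("username", "")
    if not ALLOWED_USERNAME.fullmatch(username):
        # Reject on the server regardless of whatever the client-side code claimed.
        return 400, "invalid username"
    return 200, f"welcome, {username}"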

One-way function XORed with its input

I have the following question: if $f$ is a one-way function, is $g(x) = f(x) \oplus x$ also a one-way function?

I saw the question here, "What will i obtain if i apply a xor-ing a one way function and it's input?", but from what I understand it only covers the case where f(x) is not length-preserving; correct me if I'm wrong.

I think it is not a one-way function, but I can't build a function that inverts it without inverting f(x). Can anyone show me how it can be done?

Thank you.

Can input value escape a JSON object?

I am passing the value from an input field directly into a script function, which places it inside a JSON-like object. I was wondering: is it possible for this input to escape the object and lead to XSS or something similar?

<script>
  ...
  function doSomething(item) {
    data = {'content': item}   // user-controlled value stored in the object
  }
  ...
</script>

<input id="search" type="text" value="" oninput="doSomething(this.value)"/>