Simple and universal way to password-protect existing web services that are exposed to the internet

There are many tools, devices and programs that by default run an HTTP server and expose a user interface on port 80. Even my coffee machine provides a web UI on port 80.

Now, it’s easy to make these existing web servers available over the internet by simply setting up port-forwarding on the internet-facing NAT router.

I want to do this, but I want to password-protect access to them in a simple, generic and secure way.

One simple way would be to just NOT expose them and access them only through a VPN connection. That is what I’m currently doing – but I want to be able to access the services from anywhere on the web, without a VPN tunnel.

So, let’s say I have three HTTP services on my LAN that I can access locally at

CoffeeMachine:80
MyLightSwitch:80
ToiletFlush:80

Now I want to be able to access them over the internet by going to

http://mystaticIP/coffeemachine
http://mystaticIP/lightswitch
http://mystaticIP/toilet

But I want all of them to be accessible ONLY after some form of username/password authentication.

I don’t need individual users/passwords for the different servers; the same credentials can be used for all of them.

What’s an easy yet secure way to expose all three services to the internet, without having to tamper with the HTTP servers on the devices themselves? (By secure I mean that, without knowing the password, there is no trivial security hole. I’m not worried about man-in-the-middle attacks and the like.)

Tools I have available to solve this:

• Adding an additional server running any Linux distro/services to the local network
• Setting up port-forwarding on my NAT router
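One approach that fits the listed tools: run a reverse proxy (e.g. nginx) on the extra Linux box and put HTTP Basic Auth in front of everything. A minimal sketch using the hostnames and paths from the question (the credential file would be created separately, e.g. with `htpasswd -c /etc/nginx/.htpasswd someuser`):

```nginx
# /etc/nginx/conf.d/gateway.conf -- one shared password for all three services
server {
    listen 80;

    # One shared credential file guards every location below
    auth_basic           "Home services";
    auth_basic_user_file /etc/nginx/.htpasswd;

    location /coffeemachine/ {
        proxy_pass http://CoffeeMachine:80/;   # prefix is stripped before proxying
    }
    location /lightswitch/ {
        proxy_pass http://MyLightSwitch:80/;
    }
    location /toilet/ {
        proxy_pass http://ToiletFlush:80/;
    }
}
```

Then forward only port 80 (or, better, 443 with TLS, since Basic Auth otherwise sends the password essentially in cleartext) from the NAT router to this box. One caveat: device UIs that generate absolute URLs may need additional rewriting to work behind a path prefix.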

Universal hashing – insert / search / delete

I don’t understand the highlighted text below in CLRS 3rd Ed.:

What do they mean by “containing $$\mathcal{O}(m)$$ INSERT operations”? What contains them? And why $$\mathcal{O}(m)$$?
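For reference, the family CLRS analyzes in that section is $$h_{ab}(k) = ((ak+b) \bmod p) \bmod m$$ with a prime $$p$$ larger than any key. A minimal chained table over that family might look like the following sketch (my own illustrative code, not CLRS’s; `p = 101` assumes all keys are below 101):

```python
import random

# Universal family from CLRS §11.3.3: h_{ab}(k) = ((a*k + b) mod p) mod m,
# with p prime and larger than any key, a in {1..p-1}, b in {0..p-1}.
p = 101          # illustrative choice: assumes all keys are < 101
m = 10           # number of slots

random.seed(0)
a = random.randrange(1, p)
b = random.randrange(0, p)

def h(k):
    return ((a * k + b) % p) % m

table = [[] for _ in range(m)]   # collision resolution by chaining

def insert(k):
    table[h(k)].append(k)

def search(k):
    return k in table[h(k)]

def delete(k):
    chain = table[h(k)]
    if k in chain:
        chain.remove(k)

for k in [3, 14, 15, 92, 65]:
    insert(k)
print(search(14), search(99))    # True False
```

The $$\mathcal{O}(m)$$ in the quoted passage refers to a *sequence* of table operations; the expected-time guarantee is over the random choice of $$a$$ and $$b$$ above, not over the keys.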

Reduce $L_c=\{(\langle M_1 \rangle, \langle M_2 \rangle) : L(M_1)\cap L(M_2)\neq \emptyset\}$ to the universal language

How can one reduce $$L_c=\{(\langle M_1 \rangle, \langle M_2 \rangle) : L(M_1)\cap L(M_2)\neq \emptyset\}$$ to the universal language?

My try: construct a Turing machine $$N$$ that uses a Turing machine $$U$$ deciding the universal language as a subroutine to decide $$L_c$$. $$N$$, on any input $$\langle \langle M_1 \rangle, \langle M_2 \rangle \rangle$$:
$$1.$$ Generate the next word $$w \in \Sigma^\ast$$ in canonical order.
$$2.$$ Run $$U$$ on $$\langle M_1, w\rangle$$ and $$\langle M_2, w\rangle$$.
$$3.$$ If $$U$$ accepts both, accept.
$$4.$$ If not, go back to step $$1$$.

It seems this does not work: if $$L(M_1)\cap L(M_2)= \emptyset$$, $$N$$ will loop forever (it just can’t find such a $$w$$).
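Incidentally, the canonical (shortlex) enumeration in step 1 is straightforward to implement; a sketch for the two-letter alphabet $$\{a, b\}$$ (the alphabet choice here is mine):

```python
from itertools import count, islice, product

def canonical_words(alphabet=("a", "b")):
    """Yield every word over the alphabet in canonical (shortlex) order."""
    for length in count(0):                       # lengths 0, 1, 2, ...
        for letters in product(alphabet, repeat=length):
            yield "".join(letters)                # lexicographic within a length

print(list(islice(canonical_words(), 6)))  # ['', 'a', 'b', 'aa', 'ab', 'ba']
```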

Is there an error in the universal monster rules table: natural attacks by size?

I am working on a druid character and, just getting around to wild shape’s natural attacks, I spotted what seems to be an error in the damage progression under the universal monster rules, which conflicts with the newer damage dice progression rules.

Official Damage Dice Progression Chart
$$\begin{array}{l|l} \text{Level} & \text{Dice}\\ \hline 0 & 0 \\ 1 & 1d1 \\ 2 & 1d2\\ 3 & 1d3\\ 4 & 1d4\\ 5 & 1d6\\ 6 & 1d8\\ 7 & 1d10\\ 8 & 2d6\\ 9 & 2d8\\ 10 & 3d6\\ 11 & 3d8\\ 12 & 4d6\\ 13 & 4d8\\ 14 & 6d6\\ 15 & 6d8\\ 16 & 8d6\\ 17 & 8d8\\ 18 & 12d6\\ 19 & 12d8\\ 20 & 16d6 \end{array}$$

Here we have the chart for Bite

$$\begin{array}{l|l|l} \text{Size} & \text{Dice Level} & \text{Dice}\\ \hline \text{Fine} & 1 & 1d1\\ \text{Diminutive} & 2 & 1d2\\ \text{Tiny} & 3 & 1d3\\ \text{Small} & 4 & 1d4\\ \text{Medium} & 5 & 1d6\\ \text{Large} & 6 & 1d8\\ \text{Huge} & 8 & 2d6\\ \text{Gargantuan} & 9 & 2d8\\ \text{Colossal} & 12 & 4d6 \end{array}$$

Here we can clearly see that the step from Huge to Gargantuan is only 1 and not 2, and from Gargantuan to Colossal it is 3 instead of 2. If Gargantuan were changed from 9 to 10, the chart would be correct; it’s the only value that’s off. So is this just a mistake, or is this actually what it’s supposed to be?
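The off-by-one steps are easy to check mechanically; a quick sketch over the dice levels from the Bite chart:

```python
# Dice levels per size, taken from the Bite chart above.
bite_levels = {
    "Fine": 1, "Diminutive": 2, "Tiny": 3, "Small": 4, "Medium": 5,
    "Large": 6, "Huge": 8, "Gargantuan": 9, "Colossal": 12,
}

# Difference in dice level between each size and the next one up.
sizes = list(bite_levels)
steps = {f"{a} -> {b}": bite_levels[b] - bite_levels[a]
         for a, b in zip(sizes, sizes[1:])}

print(steps["Huge -> Gargantuan"], steps["Gargantuan -> Colossal"])  # 1 3
```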

What is meant by the term “concatenation of two q’s denotes a break between two edges in Turing Machine T”? [Universal Turing Machine Encoding Scheme]

I’m studying the topic of universal Turing machine encodings, and the first line says we can write the Turing machine encoding in the form of syllables like

$$q_x c_y c_z M q_z$$

where the q’s represent states and the c’s are characters,

and M denotes either a left or a right move.

I’ve understood what these lines mean, but what does $$q_x q_z$$ mean? Or what does $$q_x q_x$$ mean? I’m quite confused: there’s no read/write or tape-head movement, so what does this all stand for?

Universal Turing machine encoding

I am trying to learn about universal Turing machines, and I am stuck on encoding TMs. Is there a specific rule that “there cannot be two ‘m’s in one sub-rule encoding”?

E.g. δ(q0, a, λ) = (q1, λ) → D1010m0110m

Is this encoding illegal?

Thanks, everyone.

Halting problem vs Universal Language

Wikipedia defines the halting set as follows:

$$H = \{(i, x) \mid \text{program } i \text{ halts when run on input } x\}$$

Ullman defines the universal language as follows:

$$U = \{(M, w) \mid \text{Turing machine } M \text{ accepts } w\}$$

The universal TM, $$U$$, is a TM which takes as input an encoded machine/string pair, $$(M,w)$$, and performs the actions of $$M$$ running on input string $$w$$. The most important achievement is to simulate the accepting (i.e., halting) behavior of $$M$$. That is, we want:

$$M$$ halts on $$w$$ if and only if $$U$$ halts on $$E(M,w)$$
or, in notational terms,
$$M↓w$$ if and only if $$U↓E(M,w)$$

Note that $$E()$$ is the encoding function. I also feel the above defines the universal language somewhat differently than Ullman does. While defining the universal TM, Ullman says “$$M$$ accepts $$w$$”, whereas the above link says “$$U$$ halts on $$E(M,w)$$”. It’s accept vs. halt that I am trying to point out. A TM can halt with or without accepting, so I feel the definitions of the universal language differ between the two sources. Q1. Right?

HALT $$= \{ x ∈ \{c,1\}^*: x = E(M,w)$$ where $$M↓w \}$$
HALT is precisely the language accepted by the Universal Turing Machine, U:
$$M↓w$$ if and only if $$U↓E(M,w)$$

where $$↓$$ seems to be the symbol indicating “halts”.

But Ullman says:

One often hears of the halting problem for TMs as a problem similar to $$L_u$$ – one that is RE but not recursive. In fact, the original TM of A. M. Turing accepted by halting, not by final state. We could define H(M) for TM M to be the set of inputs $$w$$ such that $$M$$ halts given input $$w$$, regardless of whether or not $$M$$ accepts $$w$$. Then, the halting problem is the set of pairs $$(M,w)$$ such that $$w$$ is in $$H(M)$$.

By this, I feel Ullman disagrees that the halting language is what is accepted by the universal TM.

In particular, the universal TM accepts HALT, but no TM can decide HALT.

If I get it correctly, I believe “the universal TM accepts HALT” means the universal TM can simulate HTM (a TM taking a TM and $$w$$ as input) which checks whether the input TM halts on input $$w$$. Q2. Am I right about this? Q3. But then that does not mean L(UTM) = L(HTM), as the same link says in the fourth quote. Right?

Q4. Can you summarise the halting problem vs. the universal language? I feel the link is somewhat incorrect and Ullman is correct. I believe:

• Halting language L(HTM) = {(TM,w) | TM halts on w with or without accepting}

• Universal language L(UTM) = {(TM,w) | TM accepts w by halting in final state}

• From above definitions, Halting language is not same as Universal language

• The universal language can simulate the halting language as follows: {(HTM,(TM,w)) | HTM accepts (TM,w) by halting in a final state when TM halts on w, with or without accepting w}

Am I correct with the above summary understanding?
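The accept-vs-halt distinction in the summary can be illustrated with a toy sketch (plain Python functions stand in for machines; the step-budget framing and all names are mine, not from either source):

```python
# Toy illustration, not a real TM simulator: each "machine" is a function
# that, given an input and a step budget, reports "accept", "reject", or
# "running" (still not halted within the budget). The point is only that
# "halts on w" and "accepts w" are different predicates.

def halts(machine, w, budget=1000):
    return machine(w, budget) in ("accept", "reject")

def accepts(machine, w, budget=1000):
    return machine(w, budget) == "accept"

# Halts on every input, but accepts only even-length ones:
def even_length(w, budget):
    return "accept" if len(w) % 2 == 0 else "reject"

# Never halts (within any finite budget) on the input "loop":
def loops_on_loop(w, budget):
    return "running" if w == "loop" else "accept"

# (even_length, "abc") is in the halting language but not the universal one:
print(halts(even_length, "abc"), accepts(even_length, "abc"))  # True False
```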

On the complexity of existential and universal quantifiers

I’m trying to analyze the time complexities of these two kinds of quantifiers. I need help figuring out whether I’m following the right path or making mistakes. Here’s what I’ve produced so far:

Let $$D$$ be a finite set of natural numbers drawn at random. Let’s proceed by defining two Turing machines, $$A$$ and $$E$$, such that $$A$$ implements the universal quantifier and $$E$$ the existential one:

$$A$$ accepts the language $$L_A = \{D \mid \forall d \in D,\; d \equiv 0 \pmod 2\}$$ by implementing the function $$f\colon D \rightarrow \{0, 1\}$$, $$f(D)= \lnot(d_1 \bmod 2)\land \lnot(d_2 \bmod 2)\land \cdots \land \lnot(d_n \bmod 2)$$

$$E$$ accepts the language $$L_E = \{D \mid \exists d \in D,\; d \equiv 0 \pmod 2\}$$ by implementing the function $$f\colon D \rightarrow \{0, 1\}$$, $$f(D)= \lnot(d_1 \bmod 2)\lor \lnot(d_2 \bmod 2)\lor \cdots \lor \lnot(d_n \bmod 2)$$

We want to show that the $$\forall$$ quantifier has a greater lower bound than $$\exists$$.

It’s easy to show that running $$E$$ on input $$D$$ has an upper bound of $$O(|D|)$$, since in the worst case I must review the entire search space (if only the last element of the set $$D$$ is even). The lower bound of the same TM is instead constant, $$\Omega(1)$$ (if I get an even number in the first clause that I examine).

As for machine $$A$$, things are a bit different: here the acceptance of the language $$L_A$$ has an upper bound that coincides with the lower bound, $$\Theta(|D|)$$, because I must check ALL the clauses before being able to accept or reject with absolute certainty. Of course there is always the possibility of running a probabilistic algorithm but, in that case too, the “existential” algorithm is much more reliable than the “universal” one.
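The asymmetry described above can be sketched concretely (illustrative Python that counts element checks; the input lists are arbitrary examples of mine):

```python
# An existential check can stop at the first witness, while a universal
# check must, in the accepting case, examine every element.

def exists_even(data):
    checks = 0
    for d in data:
        checks += 1
        if d % 2 == 0:          # found a witness: stop early
            return True, checks
    return False, checks        # had to scan everything to say "no"

def forall_even(data):
    checks = 0
    for d in data:
        checks += 1
        if d % 2 != 0:          # found a counterexample: stop early
            return False, checks
    return True, checks         # had to scan everything to say "yes"

print(exists_even([2, 3, 5, 7]))   # (True, 1)  -- stops immediately
print(forall_even([2, 4, 6, 8]))   # (True, 4)  -- must scan everything
```

Note that the early exits are mirror images: the existential check accepts cheaply and rejects expensively, and the universal check does the opposite.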

A possible probabilistic algorithm for recognizing $$L_A$$ could be the following: I begin to verify the clauses, and if I see that $$(n-1)$$ clauses are verified, I accept. This is obviously a trivial version (having arrived at $$(n-1)$$, I might as well get to $$n$$, since I would spend the same time but at least be sure to accept or reject correctly), and it works in half the cases. If instead I stop at $$(n-2)$$ and accept, the algorithm correctly accepts the language in a quarter of the cases. In general this algorithm accepts the language correctly with probability $$1/2^n$$, where $$n$$ is the number of clauses that I have not yet verified. This algorithm is extremely unreliable.

For the language $$L_E$$, instead, the situation is more favorable: I can accept directly (without even reading the input) and be correct with high probability, $$1-1/2^n$$.

What I have written above, although I think it is true, does not have the value of a proof. I believe, however, that it is of fundamental importance to discriminate between the complexities of the two quantifiers. Think for example of 3SAT and 3TAUT: the first language is NP-complete while the second is coNP-complete, and the only difference between the two is the change of quantifier, from existential to universal.

Finally, my question: have there been theoretical efforts to prove the different complexities of the two quantifiers? Is it even possible to do this with our current knowledge?

What is the best practice for CSRF/CORS in universal apps?

I am making a universal app using react-native and react-native-web. My GraphQL endpoint is open to the public. I know CORS/CSRF do not apply to mobile apps, but I also run the same app in the browser using react-native-web, so I need to prevent malicious websites from manipulating my users’ requests in their browsers.

I am also using WebSockets, and it seems a CSRF token is not used for WebSocket connections (Cross Site Request Forgery protection with Django and websockets).

So I need to restrict origins on the server side. Django Channels offers a way to restrict the origin to only my website (https://channels.readthedocs.io/en/latest/topics/security.html). It is like this:
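The snippet referred to here is presumably along the lines of the `OriginValidator` example in the Channels security docs; a sketch (the URL patterns and allowed origins below are placeholders of mine):

```python
# routing.py -- restrict WebSocket connections to whitelisted Origins
from channels.routing import ProtocolTypeRouter, URLRouter
from channels.security.websocket import OriginValidator

application = ProtocolTypeRouter({
    "websocket": OriginValidator(
        URLRouter([
            # ... your websocket URL patterns here ...
        ]),
        ["goodsite.com", "http://goodsite.com:80"],  # placeholder allowed origins
    ),
})
```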

However, here is the problem: I also have mobile apps, and they do not even send an Origin header. (no request header origin when using expo to express server)

How can I restrict Origin headers sent from browsers while allowing my mobile apps to communicate at the same time?

What is the best practice for CSRF/CORS in universal apps?