Can we construct an explicit Turing machine with a Halt1 oracle that decides whether a standard Turing machine halts on all inputs?
By a Halt1 oracle I mean the ability to decide whether a Turing machine equipped with a Halting oracle halts or not.
I am working on an algorithm that has multiple fixed parameters. The algorithm analyzes time series data and spits out a number. The fixed parameters need to be such that this number is as small as possible.
What I found is that parameters optimized for one specific time period don't necessarily work well when used on another time period.
As I see it, there are two possible solutions to this problem:
Option 1 would be incredibly expensive in terms of computational time. And although it makes intuitive sense that it should fix the problem, I am not sure that this would actually be the case.
Option 2 reminds me of training neural networks, where one feeds in a large number of "data points" and somehow takes a (weighted) average of the results to find a set of parameters that works well for all data points. Unfortunately, I know little to nothing about the algorithms used for this kind of optimization/learning.
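To make Option 2 concrete, here is a minimal, hypothetical sketch: choose the parameter set minimizing the *average* score over several time periods rather than the score on a single period. `score`, `periods`, and the grid are made-up stand-ins for the real algorithm and data.

```python
# Sketch: pick parameters that minimize the AVERAGE objective across
# several time periods, instead of the objective on one period.
from itertools import product

def score(params, period):
    # Placeholder objective: each period "prefers" different parameters,
    # so per-period optima disagree, like in the question.
    a, b = params
    target_a, target_b = period
    return (a - target_a) ** 2 + (b - target_b) ** 2

periods = [(1.0, 2.0), (1.5, 2.5), (0.5, 1.5)]  # hypothetical periods
grid = [x / 10 for x in range(31)]  # coarse grid over each parameter

best = min(
    product(grid, grid),
    key=lambda p: sum(score(p, per) for per in periods) / len(periods),
)
```

A grid search is the simplest possible optimizer here; for more parameters you would swap in a gradient-based or black-box optimizer, but the idea of averaging the objective over many periods stays the same.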
Any help or suggestions are greatly appreciated. Please let me know if there is anything you’d like me to expand upon.
Thanks!
This is my first question on a Stack Exchange website, so please bear with me. I am making challenges for a Jeopardy-style capture-the-flag event at my college, and I came across the minetest challenge in the hardware section of last year's Google CTF qualifier. A clean and organized solution to this problem has been provided by LiveOverflow.
I would like to design a simpler version of this problem for my college's CTF event, but I am unable to design a complex circuit that gives a true output only for a specific combination of inputs. I know that a circuit with this functionality is not very difficult to implement and just needs to represent the following logic:
trueinput1 AND trueinput2 AND ... AND NOT falseinput1 AND NOT falseinput2 AND ...
However, I want it to be vast and complicated so that participants cannot decode its functionality just by visual analysis. Is there any technique to complicate the Boolean logic above and to design a corresponding circuit that looks ugly even for a small number of inputs (32/64)?
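One common trick is to keep the specification simple but rewrite every gate with NAND-only identities, so the netlist no longer looks like a plain AND of literals. A sketch below (`TARGET`, the names, and the bit width are made up for illustration; a real challenge would also splice in dummy gates and shuffle the layout):

```python
# Hide a "magic combination" check by rewriting it gate-for-gate in NANDs.
TARGET = 0b10110010  # the only 8-bit input pattern that should output True
N = 8

def plain_check(bits):
    """Reference: AND of the true inputs and the negated false inputs."""
    return all(b == (TARGET >> i) & 1 for i, b in enumerate(bits))

def nand(a, b):
    return not (a and b)

def obfuscated_check(bits):
    # Identities used:  NOT x == NAND(x, x)
    #                   x AND y == NAND(NAND(x, y), NAND(x, y))
    acc = True
    for i, b in enumerate(bits):
        lit = b if (TARGET >> i) & 1 else nand(b, b)
        t = nand(acc, lit)
        acc = nand(t, t)  # acc = acc AND lit, built only from NANDs
    return acc
```

Checking the NAND-only version against the reference over all 2^8 inputs confirms equivalence before you commit the circuit to the challenge.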
With n input variables, we can now obtain all 2^n different classification functions needed for each possible set of missing inputs, but the computer program needs to learn only a single function describing the joint probability distribution.
This is page 98 of Ian Goodfellow’s Deep Learning Book. My confusion comes from how joint probability distributions are used to solve the problem of missing inputs. What are the random variables in this scenario? I don’t really understand the connection here so if someone could please elaborate that would be great.
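As I understand the passage, the random variables are the inputs themselves plus the label: given the full joint distribution, any pattern of missing inputs is handled by summing (marginalizing) the joint over the missing variables. A toy illustration with two binary inputs and a binary label (the probability numbers are made up):

```python
# Hypothetical joint distribution P(x1, x2, y) over binary variables.
joint = {
    (0, 0, 0): 0.20, (0, 0, 1): 0.05,
    (0, 1, 0): 0.10, (0, 1, 1): 0.15,
    (1, 0, 0): 0.05, (1, 0, 1): 0.15,
    (1, 1, 0): 0.05, (1, 1, 1): 0.25,
}

def p_y_given(observed):
    """P(y | observed inputs), marginalizing over any missing ones.

    `observed` maps input index (0 for x1, 1 for x2) to its observed value.
    """
    totals = {0: 0.0, 1: 0.0}
    for (x1, x2, y), p in joint.items():
        if all((x1, x2)[i] == v for i, v in observed.items()):
            totals[y] += p  # sum the joint over the missing variables
    z = totals[0] + totals[1]
    return {y: p / z for y, p in totals.items()}
```

The same single function covers every missing-input pattern: `p_y_given({0: 1, 1: 1})` uses both inputs, while `p_y_given({1: 0})` classifies with x1 missing. That is the sense in which one joint distribution replaces 2^n separate classifiers.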
I read that a reasonable encoding of inputs is one whose length is at most polynomial in the size of the 'natural representation' of the input. For instance, binary encodings are reasonable, but unary encodings are not.
But say the input is a graph whose natural representation is a vertex and edge list. Suppose the graph has $k$ vertices. If I use unary to encode, the overall length of the input for the vertex list would be $O(k^2)$, i.e. $|1^1| + |1^2| + |1^3| + \dots + |1^k|$. Isn't this unary encoding still polynomial with respect to the number of vertices of the graph (which is $k$)?
What am I missing here?
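For reference, the arithmetic in the question can be checked directly: unary labels for vertices $1..k$ cost $1 + 2 + \dots + k = k(k+1)/2$ symbols in total, versus roughly $k \log_2 k$ symbols in binary. A quick sketch:

```python
# Total encoding length of the vertex labels 1..k.
def unary_len(k):
    # |1^1| + |1^2| + ... + |1^k| = k(k+1)/2
    return k * (k + 1) // 2

def binary_len(k):
    # sum of the bit-lengths of 1..k, about k*log2(k)
    return sum(i.bit_length() for i in range(1, k + 1))
```

Both are indeed polynomial in $k$; the question of where unary encodings become unreasonable is about numeric values in the input (e.g. weights), not the vertex labels themselves.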
So I have this truth table:

| x y z t | f
-------------
| 0 0 0 0 | 1
| 0 0 0 1 | 1
| 0 0 1 0 | 0
| 0 0 1 1 | 1
-------------
| 0 1 0 0 | 0
| 0 1 0 1 | 0
| 0 1 1 0 | 1
| 0 1 1 1 | 1
-------------
| 1 0 0 0 | 1
| 1 0 0 1 | 0
| 1 0 1 0 | 1
| 1 0 1 1 | 1
-------------
| 1 1 0 0 | 1
| 1 1 0 1 | 1
| 1 1 1 0 | 1
| 1 1 1 1 | 1
the function is random.
Then I wrote down the minterm and maxterm indices (just for better orientation).
Σ(x,y,z,t) = (0,1,3,6,7,8,10,11,12,13,14,15) Π(x,y,z,t) = (2,4,5,9)
Then I used a Karnaugh map (rows $xy$, columns $zt$, both in Gray-code order):

           zt
         00   01   11   10
       ---------------------
xy  00 |  1  |  1  |  1  |  0  |
       ---------------------
    01 |  0  |  0  |  1  |  1  |
       ---------------------
    11 |  1  |  1  |  1  |  1  |
       ---------------------
    10 |  1  |  0  |  1  |  1  |
       ---------------------
And got the minimized function

$$f(x,y,z,t) = \bar x\,\bar y\,t + xy + xz + \bar y\,\bar z\,\bar t + yz$$
And now I have to draw the schematic of this logical function using only INV, AND, OR gates. After this, I have to draw the schematic using only NAND gates.
I think the next step is to use De Morgan's laws and the law of double negation, but I'm not 100% sure.
Can someone please help me with the circuit realization (on paper only)? Our teacher didn't have time to explain how to do it because of COVID-19, and now we have to learn it on our own. I would appreciate any help.
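Not a substitute for the paper schematic, but a way to sanity-check it: the identities NOT a = NAND(a, a), a AND b = NAND(NAND(a, b), NAND(a, b)), and a OR b = NAND(NAND(a, a), NAND(b, b)) let any INV/AND/OR schematic be rewritten gate-for-gate in NANDs. The sketch below verifies a NAND-only build against the Σ list from the question; the sum-of-products cover used is one cover consistent with that list, and your own minimized cover may differ.

```python
# Sigma list from the question.
minterms = {0, 1, 3, 6, 7, 8, 10, 11, 12, 13, 14, 15}

def nand(a, b):
    return 1 - (a & b)

def f_nand(x, y, z, t):
    def NOT(a):          # NOT a = NAND(a, a)
        return nand(a, a)
    def AND(a, b):       # a AND b = NAND(NAND(a,b), NAND(a,b))
        n = nand(a, b)
        return nand(n, n)
    def OR(a, b):        # a OR b = NAND(NAND(a,a), NAND(b,b))
        return nand(nand(a, a), nand(b, b))
    # One SOP cover consistent with the minterm list above:
    terms = [
        AND(AND(NOT(x), NOT(y)), t),        # x'.y'.t
        AND(x, y),                          # x.y
        AND(x, z),                          # x.z
        AND(AND(NOT(y), NOT(z)), NOT(t)),   # y'.z'.t'
        AND(y, z),                          # y.z
    ]
    out = terms[0]
    for s in terms[1:]:
        out = OR(out, s)
    return out
```

Checking all 16 input rows against the minterm list confirms the NAND-only rewrite computes the same function as the original INV/AND/OR schematic.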
notation: $x+y := \mbox{OR}(x,y)$, $\bar x := \mbox{NOT}(x)$, $xy := \mbox{AND}(x,y)$, $1 := \mbox{TRUE}$, $0 := \mbox{FALSE}$.
Let $f$ be a Boolean function of $n$ variables, i.e. $f: \{0,1\}^n \to \{0,1\}$.
minterm := any product (AND) of $n$ literals (complemented or uncomplemented); e.g., $x_1 \bar x_2 x_3$ is a minterm in 3 variables.
$\mbox{NOR2}(f)$ is the minimum number of 2-input NOR gates required to realize a given function $f$. For instance, $\mbox{NOR2}(x_1 x_2) = 3$.
Let $f_1 = m_1$, $f_2 = m_2$, where $m_1, m_2$ are minterms that are co-prime (i.e., $f_1 + f_2$ can't be minimized further; in other words, $m_1, m_2$ are prime implicants of $f_1 + f_2$). For instance, $x_1 \bar x_2 x_3$ and $x_1 x_2 \bar x_3$ are co-prime.
Then, is the following true?
$$\mbox{NOR2}(f_1 + f_2) \ge \max\{ \mbox{NOR2}(f_1), \mbox{NOR2}(f_2) \}$$
[I.e., adding two co-prime minterms can't yield a 2-input NOR circuit with fewer gates.]
I think it is true but I can’t think of a proof. Any ideas on how to start proving it?
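One way to start is to test the conjecture by brute force on small cases. The sketch below assumes a particular circuit model (gates take 2 inputs; gate inputs are variables or earlier gate outputs; no constants), which may differ from the intended one. Truth tables are packed into integer bitmasks: bit r of a mask is the signal's value on input row r.

```python
from itertools import combinations_with_replacement

def _reachable(signals, target, depth, full):
    if target in signals:
        return True
    if depth == 0:
        return False
    for a, b in combinations_with_replacement(signals, 2):
        g = ~(a | b) & full  # 2-input NOR on truth-table masks
        if g not in signals and _reachable(signals + (g,), target, depth - 1, full):
            return True
    return False

def min_nor2(target, nvars, max_gates=5):
    """Minimum number of 2-input NOR gates realizing `target`, or None."""
    n_rows = 1 << nvars
    full = (1 << n_rows) - 1
    # Truth-table mask of each input variable x_i.
    inputs = tuple(
        sum(1 << r for r in range(n_rows) if (r >> i) & 1)
        for i in range(nvars)
    )
    for k in range(max_gates + 1):  # iterative deepening gives minimality
        if _reachable(inputs, target, k, full):
            return k
    return None
```

With this in hand you can enumerate co-prime minterm pairs over 3 variables and compare NOR2(f1+f2) against max(NOR2(f1), NOR2(f2)) directly, though the search cost grows quickly with gate count.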
This might be a very easy question. Let's consider cryptographic hash functions with the usual properties: weak and strong collision resistance and preimage resistance.
For any given output there are obviously multiple inputs. But is the number of preimages necessarily infinite, for any given hash value?
How would I go about giving a formal proof that there exists no cryptographic hash function h() such that some value v = h(m*) has only a finite set of possible inputs m*? Would this necessarily break collision resistance?
As far as my knowledge goes, keyboards don't store keystrokes in their memory by default (excluding those bundled with keyloggers). What comes to mind, though, is that some keyboards do have built-in memory for storing user preferences (e.g. gaming keyboards). Can this be reprogrammed to store data other than just an LED color combo?
Can I sell my keyboard without worrying that the new owner might somehow recover my previous input?
Cheers, Dominic
Can this method of encryption prevent brute-force attacks?
If I had a hypothetical table (or function) where every grammatically valid sentence (in existence, limited to some number of words) was given an associated number, e.g.:

"Good morning, how are you." = 3283
"Today is a nice day." = 2183
Then added a number (as a key), e.g.:

3283 + 1234 = 4517

Wouldn't this final output of 4517 be effectively protected against brute-force attacks? (Ignoring the difficulty of producing a hash table/function capable of reducing every valid input to a single number, and the issue of sending the key 1234 securely.)
Is there any way of finding the original input from the output alone?
Is limiting the domain of the encryption to only valid inputs an effective method of preventing brute-force attacks?
If so, is there any practical example of this? Why or why not?
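A toy model of the scheme, using the question's own example table and key, shows what a brute-force attacker sees: "encryption" is addition of a secret key to a sentence ID, so the attacker tries every candidate key and keeps the ones that decode to a valid ID.

```python
# The question's example table: each valid sentence gets an integer ID.
sentence_ids = {
    "Good morning, how are you.": 3283,
    "Today is a nice day.": 2183,
}
id_to_sentence = {v: s for s, v in sentence_ids.items()}

def encrypt(sentence, key):
    return sentence_ids[sentence] + key

key = 1234
ciphertext = encrypt("Good morning, how are you.", key)  # 3283 + 1234

# Brute force over candidate keys: every key whose "decryption" lands on
# a valid ID yields a plausible plaintext.
candidates = [
    (k, id_to_sentence[ciphertext - k])
    for k in range(ciphertext + 1)
    if ciphertext - k in id_to_sentence
]
```

With a huge, dense sentence table many keys decode to valid sentences and the attacker cannot tell which is right (one-time-pad-like ambiguity); with a sparse table the true plaintext may be the unique hit, so the domain restriction alone does not decide the question either way.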