What is the relation between parsing languages and checking languages?

I have looked at a number of textbooks on computability theory. They typically have the following form:

  • Define a language class (regular, context-free, context-sensitive, recursively enumerable)

  • Define an automaton that recognizes the class (finite automaton, pushdown automaton, linear-bounded automaton, Turing machine)

However, another fundamental question is how to parse a language. I have not found a treatment of parsing as a computational problem in these textbooks.

Is there a simple relation between an automaton that recognizes a language, and an automaton that parses the language (and outputs a parse tree)?
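For concreteness, here is a minimal sketch of the distinction (my own illustration, not from any of the textbooks): a recursive-descent recognizer and parser for the toy grammar S → ( S ) S | ε. The control flow is identical; the parser merely builds and returns a tree where the recognizer returns a bit. The class and method names are invented for the example.

final class ParseDemo {
    // A parse-tree node for S -> ( S ) S; null children encode the epsilon case.
    static final class Node {
        final Node inner, rest;
        Node(Node inner, Node rest) { this.inner = inner; this.rest = rest; }
    }

    private final String input;
    private int pos = 0;

    ParseDemo(String input) { this.input = input; }

    // Recognizer: consumes a prefix derivable from S, reports only yes/no.
    boolean recognize() {
        if (pos < input.length() && input.charAt(pos) == '(') {
            pos++;                                   // '('
            if (!recognize()) return false;          // S
            if (pos >= input.length() || input.charAt(pos) != ')') return false;
            pos++;                                   // ')'
            return recognize();                      // S
        }
        return true;                                 // epsilon
    }

    // Parser: same control flow, but returns a parse tree (null = reject).
    Node parse() {
        if (pos < input.length() && input.charAt(pos) == '(') {
            pos++;
            Node inner = parse();
            if (inner == null) return null;
            if (pos >= input.length() || input.charAt(pos) != ')') return null;
            pos++;
            Node rest = parse();
            return rest == null ? null : new Node(inner, rest);
        }
        return new Node(null, null);                 // epsilon leaf
    }

    public static void main(String[] args) {
        ParseDemo d = new ParseDemo("(())()");
        System.out.println(d.recognize() && d.pos == d.input.length()); // true
    }
}

What the sketch is meant to suggest: for this deterministic grammar, the parser is just the recognizer plus output actions. Whether that relationship holds in general for the automata in the hierarchy is exactly the question.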

Sqrt(x) function implementation; what is $(i^2 \leq x) \land ((i + 1)^2 > x)$ checking?

Recently I was working on a LeetCode question

Compute and return the square root of x, where x is guaranteed to be a non-negative integer. Since the return type is an integer, the decimal digits are truncated and only the integer part of the result is returned.

Example 1:

Input: 4
Output: 2

Example 2:

Input: 8
Output: 2
Explanation: The square root of 8 is 2.82842..., and since
             the decimal part is truncated, 2 is returned.

Here is a solution that I really like and understand, except for one line. It uses binary search to arrive at the answer:

#include <limits.h>

int mySqrt(int x) {
    if (x == 0) return x;
    int left = 1, right = INT_MAX, mid;
    while (left < right) {
        mid = left + (right - left) / 2;  // midpoint without overflowing left + right
        // mid <= x/mid  is  mid^2 <= x  rearranged;
        // (mid + 1) > x/(mid + 1)  is  (mid + 1)^2 > x  rearranged.
        if (mid <= x / mid && (mid + 1) > x / (mid + 1)) {
            return mid;
        }
        if (mid < x / mid) {
            left = mid + 1;
        } else { // mid > x / mid
            right = mid;
        }
    }
    return right;
}

The conceptual question is: why is it true that, for a given number $i$, the condition $(i^2 \leq x) \land ((i + 1)^2 > x)$ determines whether $i$ is the truncated integer square root of $x$? (The code block above returns on the identical condition, but with the inequalities rearranged to avoid integer overflow.)
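In case it is useful, the chain of equivalences can be spelled out (standard floor-function reasoning; this is my addition, not from the post). Since squaring is monotone on non-negative integers:

$$i = \lfloor\sqrt{x}\rfloor \iff i \le \sqrt{x} < i + 1 \iff i^2 \le x < (i+1)^2 \iff (i^2 \leq x) \land ((i + 1)^2 > x)$$

The overflow-free rearrangement rests on the fact that, for positive integers $i$, $i^2 \le x \iff i \le \lfloor x/i \rfloor$, which is exactly what mid <= x / mid computes in integer arithmetic.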

Certificate checking with OCSP and root CA not trusted

I have a root certificate authority, a sub CA, and a server.

The root CA issues a certificate for the sub CA.

The sub CA issues a certificate for the server.

I want to check the whole certification chain, from the server up to the root CA, with OCSP.

The command

openssl ocsp -issuer chain.pem -cert server.pem -CAfile root_ca.crt -text -url http://ipa-ca.sub.berettadomaine.fr/ca/ocsp 

gives the result:

Response Verify Failure
140376105273232:error:27069070:OCSP routines:OCSP_basic_verify:root ca not trusted:ocsp_vfy.c:166:
server.pem: good

chain.pem contains all the certificates (root CA, sub CA, and server).

So the root CA is reported as not trusted, even though I passed the option “-CAfile root_ca.crt…

Do you know where my mistake is, and how I can check the whole chain?

Thank you very much.
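Not an answer to the openssl invocation itself, but as a cross-check it can help to validate the same chain with a different tool. Below is a rough sketch using Java's PKIX API, which can run OCSP checking over every certificate in the chain. The file names server.pem and root_ca.crt come from the question; sub_ca.crt and everything else are my assumptions.

import java.io.FileInputStream;
import java.security.cert.*;
import java.util.EnumSet;
import java.util.List;
import java.util.Set;

// Sketch: validate server -> sub CA against the root trust anchor, with OCSP
// revocation checking on the whole chain. Java reads the OCSP responder URL
// from each certificate's AIA extension (the ocsp.responderURL security
// property can override it).
public class OcspCheck {
    static X509Certificate load(String path) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        try (FileInputStream in = new FileInputStream(path)) {
            return (X509Certificate) cf.generateCertificate(in);
        }
    }

    public static void main(String[] args) throws Exception {
        X509Certificate server = load("server.pem");
        X509Certificate subCa  = load("sub_ca.crt");   // assumed file name
        X509Certificate rootCa = load("root_ca.crt");

        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        CertPath path = cf.generateCertPath(List.of(server, subCa));

        PKIXParameters params =
            new PKIXParameters(Set.of(new TrustAnchor(rootCa, null)));
        CertPathValidator cpv = CertPathValidator.getInstance("PKIX");

        // Use OCSP only (no CRL fallback) so any failure is an OCSP failure.
        PKIXRevocationChecker rc =
            (PKIXRevocationChecker) cpv.getRevocationChecker();
        rc.setOptions(EnumSet.of(PKIXRevocationChecker.Option.NO_FALLBACK));
        params.addCertPathChecker(rc);

        cpv.validate(path, params);    // throws if any link in the chain fails
        System.out.println("chain and OCSP status OK");
    }
}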

Proof by contradiction – Only checking a right-neighbor in a sequence of pairwise distinct integers is sufficient to identify the first local maximum

I’m trying to figure out whether my proof is valid. I think it makes intuitive sense, but I’m worried I’m missing something. Any help would be much appreciated!

Question

A peak element is an element that is greater than its neighbors.

Given an input array nums, where nums[i] ≠ nums[i+1], find a peak element and return its index. The array may contain multiple peaks; in that case, returning the index of any one of the peaks is fine.

You may imagine that nums[-1] = nums[n] = -∞.

Example 1:

Input: nums = [1,2,3,1]
Output: 2
Explanation: 3 is a peak element and your function should return the index number 2.

Example 2:

Input: nums = [1,2,1,3,5,6,4]
Output: 1 or 5
Explanation: Your function can return either index number 1 where the peak element is 2, or index number 5 where the peak element is 6.

Solution

public int findPeakElement(int[] nums) {
    for (int i = 0; i < nums.length - 1; i++) {
        if (nums[i] > nums[i + 1]) {
            return i;
        }
    }
    return nums.length - 1;
}

Why don’t you have to check the left neighbor at each element?

Assume towards a contradiction that we are iterating through nums, have yet to discover a peak, and come across an element at index i whose right neighbor at index i+1 is strictly smaller. If the element at index i were not a peak, then the element at index i-1 would have to be strictly larger. Then we have

nums[i-1] > nums[i] > nums[i+1] 

This implies that nums[i+1] is the last element (so far) of a strictly decreasing run of elements, and the first element of that run must be a peak: either the run starts at index 0 (where the left neighbor is nums[-1] = -∞), or it starts at some index k with 0 < k < i, where nums[k-1] < nums[k] > nums[k+1]. But the loop returns at the first index whose right neighbor is smaller, so it would already have returned at that earlier peak. This contradicts our assumption that no peak had been found; therefore the first element whose right neighbor is strictly smaller is a peak.
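As a sanity check (my own instrumentation, not part of the proposed solution), the invariant that drives the proof can be asserted directly in the scan: every element already passed was smaller than its right neighbor, so the left-neighbor comparison is always known in advance.

// Same linear scan, with assertions encoding the proof's invariant.
// Run with `java -ea` to enable assertions.
public int findPeakElement(int[] nums) {
    for (int i = 0; i < nums.length - 1; i++) {
        // Invariant: nums[0] < nums[1] < ... < nums[i], so nums[i]'s
        // left neighbor (if any) is already known to be smaller.
        assert i == 0 || nums[i - 1] < nums[i];
        if (nums[i] > nums[i + 1]) {
            return i;  // left neighbor smaller by the invariant, right by the test
        }
    }
    // Loop finished: nums is strictly increasing, so the last element is a
    // peak (its right "neighbor" is -infinity by the problem statement).
    assert nums.length == 1 || nums[nums.length - 2] < nums[nums.length - 1];
    return nums.length - 1;
}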

Checking disjointness between subsets of a poset

Suppose we have a poset $(P, \le)$ and two subsets $X \subseteq P$ and $Y \subseteq P$, together with a function $f : P^2 \to 2$ that efficiently computes, for any $(x, y) \in P^2$, whether there exists a $z \in P$ such that $(x \le z) \wedge (y \le z)$. We want to return $\mathbf{T}$ if there exists a pair $(x, y) \in X \times Y$ such that $f(x, y) = 1$, and $\mathbf{F}$ otherwise, using the fewest possible calls to $f$.
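For reference, the obvious baseline (all names mine) makes at most $|X| \cdot |Y|$ calls and stops at the first hit; the question is whether the order structure of $P$ lets us do better.

import java.util.Set;
import java.util.function.BiPredicate;

// Baseline sketch (names are mine): probe pairs until f reports a common
// upper bound, making at most |X| * |Y| calls to f.
final class JoinablePair {
    static <P> boolean exists(Set<P> xs, Set<P> ys, BiPredicate<P, P> f) {
        for (P x : xs)
            for (P y : ys)
                if (f.test(x, y))   // f(x, y) = 1: some z with x <= z and y <= z
                    return true;    // T
        return false;               // F
    }
}

One easy reduction for finite $X$ and $Y$: if $x' \le x$ and $f(x, y) = 1$, then $f(x', y) = 1$ as well (any upper bound of $x$ is also above $x'$), so it suffices to probe only the minimal elements of $X$ and $Y$.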

Password checking resistant to GPU attacks and leaked password files without introducing a DoS attack on the server?

In the very old days, passwords were stored in clear text. This made it trivial for an attacker to log in if he had access to a leaked password file.

Later, passwords were hashed once and the hash value stored. If the attacker had a leaked password file, he could hash guesses and, if a hash value matched, use that guess to log in.

Then passwords were salted and hashed thousands of times on the server, and the salt and the resulting hash value were stored. If the attacker had a leaked password file, he could use specialized ASICs to hash guesses and, if a guess matched, use that password to log in.
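For concreteness, the salted-and-iterated construction described above is what PBKDF2 standardizes. A minimal sketch in Java (iteration count, key length, and salt size are illustrative, not recommendations):

import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import java.security.SecureRandom;

// Sketch of the salted, iterated hashing described above, via PBKDF2.
final class PasswordHash {
    static byte[] newSalt() {
        byte[] salt = new byte[16];                  // per-user random salt
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    static byte[] hash(char[] password, byte[] salt) throws Exception {
        // 100,000 iterations, 256-bit output: illustrative parameters only.
        // Store the salt alongside the result; verify by recomputing and
        // comparing in constant time.
        PBEKeySpec spec = new PBEKeySpec(password, salt, 100_000, 256);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                               .generateSecret(spec).getEncoded();
    }
}

Memory-hard designs such as scrypt and Argon2 are the usual next step when ASIC resistance is the goal, which is presumably what the question is probing.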

Can we do better than that?

Can we make password guessing so hard for an attacker that, even with the hashed password file and specialized ASICs, he gains no major advantage over testing passwords against the server itself?

Why did browsers choose to implement HSTS with Preload over checking custom DNS information?

Browsers and standards bodies favor HSTS with Preload because it avoids ever sending an http request to a website that supports https. This is good, because cleartext http requests can be intercepted to mount man-in-the-middle attacks.
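(For reference, opting in is a single response header; the preload token at the end is what the centralized list's submission process looks for:)

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload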

But a number of websites explain that a centralized Preload list doesn’t scale well to the mostly-https web proposed by the W3C, the EFF, and others. Managing one centralized list creates a bottleneck for looking up, adding, and deleting list items.

Yet this technology was implemented rather than, say, using DNS, which is already nicely distributed and already used by browsers to look up domain names.

Of course, DNS is not yet secure, and proposals to make it secure are controversial. But why would DNS have to be secure to hold one more bit of information (whether the domain supports https, and ONLY https, or not)?

In the worst case, a malicious MiTM attack could make it seem that a website is insecure when it is actually secure. But in this case, an insecure connection would simply fail. This failure would deny the malicious user any advantage.

So naturally I’m wondering why a centralized HSTS Preload list is preferred over adding a new flag to DNS zones indicating that a domain supports https connections.