Generating PDFs with PHP using (PDF)LaTeX [closed]

I am preparing some tests for my students that should be generated by PHP on a web host. I have stored the LaTeX source code in a database and would like to assemble the PDF using pdflatex. Is there any solution for that? I cannot find anything for PHP.

The obvious answer would be to generate the .tex file and then use shell_exec with TeX Live installed on my machine. But since this runs on a web host where I cannot install TeX Live, that approach does not work in general.

If it is not possible with PHP, is there any chance of using some JavaScript library on the frontend to generate the PDF for download?

Thank you.

Which practices should I use while generating SMS codes for auth on my project?

Let’s imagine that we have SMS verification auth using random 4-digit codes, e.g. 1234, 5925, 1342, etc.

I’m using this random inclusive algorithm on my Node.js server:

function getRandomIntInclusive(min, max) {
    min = Math.ceil(min);
    max = Math.floor(max);
    return Math.floor(Math.random() * (max - min + 1) + min); // the maximum is inclusive and the minimum is inclusive
}

const another_one_code = getRandomIntInclusive(1000, 9999);

taken from https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/random

So I have a range from 1000 to 9999, and I have some questions about security:

  1. Am I using a good algorithm? Maybe I need to use something better?
  2. Will it increase security if I check the codes sent during the last $n$ minutes in the db and regenerate if the new code matches one of them (the “same code sent twice” case), so the user always gets a random run like 5941-2862-1873-3855-2987 and never 1023-1023-2525-2525-3733-3733? I understand that the chance is low, but anyway…

Thank you for your answers!

Is it possible to get the true generating function of a PRNG?

Since every sequence of pseudo-random numbers must be generated by deterministic means, it has to follow some underlying mathematical expression (function-like, I guess). Suppose you intend to obtain this underlying expression in order to predict the output of the PRNG. Even if you could predict the next pseudo-random number that the expression will generate, every single time, for a billion iterations (say $n$), you could never be sure that the process will not backfire at any given moment, as a consequence of the underlying expression being defined by some piecewise function of the kind:

$$\forall x:\; g(x)=\delta$$

$$g'(x)=\begin{cases}\delta & \text{if } x < n \\ \delta' & \text{if } x \geq n\end{cases}$$

Where $\delta$ and $\delta'$ are distinct mathematical expressions as functions of $x$, and $n$ is an arbitrarily large threshold. I have to attempt such a feat (predicting the next random number that a PRNG will output) with machine-learning tools, and this observation, although perhaps trivial, may be of importance, at least to make clear that I will not be able to find any definite solution to the task, only partial, working solutions.
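As a toy illustration of the piecewise behaviour described above (both sub-generators and the threshold are made up for the sketch), here is a generator whose rule silently changes once the input crosses $n$:

```javascript
// Toy illustration of the piecewise generator g': two deterministic
// sub-rules (delta and delta'), with a switch at an arbitrary threshold n.
// A predictor trained only on outputs with x < n cannot anticipate the switch.
const THRESHOLD = 5; // the arbitrary n

function delta(x) { return (1103515245 * x + 12345) % 2147483648; } // an LCG step
function deltaPrime(x) { return (x * x + 1) % 2147483648; }          // a different rule

function gPrime(x) {
  return x < THRESHOLD ? delta(x) : deltaPrime(x);
}

// The first THRESHOLD outputs agree with delta alone...
for (let x = 0; x < THRESHOLD; x++) {
  console.log(gPrime(x) === delta(x)); // true
}
// ...and then the generating rule silently changes.
console.log(gPrime(THRESHOLD) === delta(THRESHOLD)); // false (for this choice of rules)
```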

My issue is that I lack solid or even basic knowledge of the fundamentals of mathematical proof, and I am not even sure whether the above counts as a rigorous proof, or whether there is a way to express the thought formally. My inquiry would be to know if I am mistaken in my assessment and, otherwise, to obtain a formal proof to include in my work in a respectable manner. Any thoughts and remarks are welcome.

Marginal Probability of Generating a Tree

Fix some finite graph $G = (V, E)$, and some vertex $x$.

Suppose I generate a random sub-tree of $G$ of size $N$, containing $x$, as follows:

  1. Let $T_0 = \{ x \}$.
  2. For $0 < n \leqslant N$:

    i. Let $B_n$ be the set of $y \in V$ such that $y \notin V(T_{n-1})$, and such that $(z, y) \in E$ for exactly one $z \in V(T_{n-1})$.

    ii. Form $T_n$ by

    • Selecting some $y_n \in B_n$ with probability $q_n (y_n | T_{n-1})$,
    • Adding it to $T_{n-1}$, and
    • Adding the edge between $y_n$ and its unique neighbour in $V(T_{n-1})$.
  3. Return $T_N$.

Suppose also that $q_n (y_n | T_{n-1})$ can be computed easily for all $(T_{n-1}, y_n)$. I am interested in efficiently calculating the marginal probability of generating the tree $T_N$, given that I began growing it at $T_0 = \{ x \}$, i.e.

$$P(T_N | T_0 = \{ x \}) = \sum_{y_1, \cdots, y_N} \prod_{n = 1}^N q_n (y_n | T_{n-1}).$$

My question is essentially whether I should expect to be able to find an efficient (i.e. polynomial-time) algorithm for this, and if so, what it might be.

Some thoughts:

  • Naively, the sum has exponentially many terms, which precludes evaluating it directly.

  • On the other hand, this problem is also highly-structured (trees, recursion, etc.), which might suggest that some sort of dynamic programming approach would be feasible. I’m not sure of exactly how to approach this.

  • Relatedly, I know how to calculate unbiased, non-negative estimators of $P(T_N | T_0 = \{ x \})$, which have reasonable variance properties, by using techniques from Sequential Monte Carlo / particle filtering. This suggests that the problem is at least possible to approximate well in a reasonable amount of time.
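To make the exponential sum concrete, here is a brute-force reference implementation (emphatically not the hoped-for polynomial algorithm; all names are made up). It assumes $q_n$ is uniform over $B_n$ as a placeholder choice, identifies $T_N$ by its vertex set (valid when the induced subgraph on those vertices is itself a tree), and enumerates every growth order:

```javascript
// Brute-force evaluation of P(T_N | T_0 = {root}) by summing the product
// of q_n over every order in which T_N's vertices can be added.
// `adj` is an adjacency map for G; q_n is taken uniform over the boundary.
function marginal(adj, root, targetVertices) {
  const target = new Set(targetVertices);
  function grow(tree, prob) {
    if (tree.size === target.size) return prob;
    // B_n: vertices outside the tree adjacent to exactly one tree vertex.
    const boundary = [];
    for (const v of Object.keys(adj)) {
      if (tree.has(v)) continue;
      const links = adj[v].filter((u) => tree.has(u)).length;
      if (links === 1) boundary.push(v);
    }
    let total = 0;
    for (const y of boundary) {
      if (!target.has(y)) continue; // this order cannot yield T_N
      const q = 1 / boundary.length; // placeholder uniform q_n(y | T_{n-1})
      tree.add(y);
      total += grow(tree, prob * q);
      tree.delete(y);
    }
    return total;
  }
  return grow(new Set([root]), 1);
}

// Hypothetical 4-cycle a-b-c-d-a; probability of growing the path {a,b,c} from a:
console.log(marginal({ a: ["b", "d"], b: ["a", "c"], c: ["b", "d"], d: ["c", "a"] },
                     "a", ["a", "b", "c"])); // 0.25
```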

Is generating a QR code for 2FA from the Google Chart API risky?

I am currently using two-factor authentication to tighten security for my login system. I use Google Authenticator to scan a QR code, which generates a key I can use to log in.

What worries me with my implementation is the way I create my QR code in PHP using this API:

'https://chart.googleapis.com/chart?chs='.$width.'x'.$height.'&chld='.$level.'|0&cht=qr&chl='.$url_containing_secret.''

Using the Chart API seems a bit unsafe, since I'm basically sharing my secret with a third party over the network. Isn't this actually risky? I'm seriously considering creating the QR code using some library instead of an external API.

Am I too paranoid?

Generating trusted random numbers for a group?

Alice and Bob need to share some cryptographically-secure random numbers. Alice does not trust Bob, and Bob does not trust Alice. Clearly, if Alice generates some numbers, and hands them to Bob, Bob is skeptical that these numbers are, in fact, random, and suspects that Alice has instead generated numbers that are convenient for her.

One naive method might be for each of them to generate a random number, and to combine those numbers in some way (e.g. xor). Since they must be shared, and someone has to tell what theirs is first, we might add a hashing scheme wherein:

1) Alice and Bob each generate a random number, hash it, and send the hash to the other (to allow for verification later, without disclosing the original number).
2) When both parties have received the hash, they then share the original numbers, verify them, xor the two numbers, and confirm the result of the xor with each other.

However, this has a number of problems (which I’m not sure can be fixed by any algorithm). Firstly, even if Alice’s numbers are random, if Bob’s are not, it is not clear that the resulting xor will then be random. Secondly, I’m not certain that the hashing scheme described above actually solves the “you tell first” problem.

Is this a viable solution to the “sharing random numbers in non-trust comms” problem? Are there any known solutions to this problem that might work better (faster, more secure, more random, etc)?

What is the running time of generating all $k$ combinations of $n$ items $\binom{n}{k}$?

I was solving the following problem, linked just for reference (441 – Lotto). It basically requires generating all $k$-combinations of $n$ items.

void backtrack(std::vector<int>& a,
               int index,
               std::vector<bool>& sel,
               int selections) {
    if (selections == 6) { // k is always 6 for 441 - Lotto
        // print combination
        return;
    }
    if (index >= (int)a.size()) { return; } // no more elements to choose from
    // two choices:
    // (1) select a[index]
    sel[index] = true;
    backtrack(a, index + 1, sel, selections + 1);
    // (2) don't select a[index]
    sel[index] = false;
    backtrack(a, index + 1, sel, selections);
}

I wanted to analyze my own code. I know that at the top level (level 0) I'm making one call. At the next level (level 1) of the recursion, I have two calls to backtrack. At the following level, I have $2^2$ calls. The last level would have $2^n$ subproblems. Each call does $O(1)$ work of selecting or not selecting the element, so the total time would be $1 + 2 + 2^2 + 2^3 + \cdots + 2^n = 2^{n+1} - 1 = O(2^n)$.

I was thinking that since we're generating $\binom{n}{k}$ combinations, there might be a better algorithm with a better running time, since $\binom{n}{k} = O(n^k)$ for fixed $k$. Or maybe my algorithm is wasteful and there is a better way? Or is my analysis in fact not correct? Which one is it?
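For comparison, here is a sketch (in JavaScript rather than C++, purely to keep it self-contained) of the standard enumeration that only recurses into prefixes that can still reach size $k$, so it visits viable combination prefixes rather than all $2^n$ subsets:

```javascript
// Enumerate all k-combinations of `a` by extending a partial selection
// with each remaining candidate in turn, pruning hopeless branches.
function combinations(a, k) {
  const out = [];
  function extend(start, chosen) {
    if (chosen.length === k) {
      out.push(chosen.slice());
      return;
    }
    // Not enough elements left to reach size k: prune this branch.
    if (a.length - start < k - chosen.length) return;
    for (let i = start; i < a.length; i++) {
      chosen.push(a[i]);
      extend(i + 1, chosen);
      chosen.pop();
    }
  }
  extend(0, []);
  return out;
}

console.log(combinations([1, 2, 3, 4], 2).length); // 6
```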

Strange data size when generating a very large word list with Crunch

While trying out the wordlist generator crunch in Kali Linux 2020.1 I came across the following behaviour:

root@kali:/home/kali# crunch 10 10 \
abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890,.-:! -o chars.txt
Crunch will now generate the following amount of data: 1604471776359824323 bytes
1530143524513 MB
1494280785 GB
1459258 TB
1425 PB
Crunch will now generate the following number of lines: 1822837804551761449

root@kali:/home/kali# crunch 10 10 \
abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890,.-: -o chars.txt
Crunch will now generate the following amount of data: 17251705690018753536 bytes
16452508630770 MB
16066902959 GB
15690334 TB
15322 PB
Crunch will now generate the following number of lines: 1568336880910795776

How come removing the exclamation mark blows up the calculated wordlist size to 15322 PB, in contrast to 1425 PB if it were included?

For me this very much looks like a bug in the code.
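The advertised figures can be recomputed with arbitrary-precision integers (each line is 10 characters plus a newline, i.e. 11 bytes; the first charset has 67 characters, the second 66), which makes the discrepancy easy to pin down:

```javascript
// Recompute crunch's advertised sizes with BigInt: 67 characters with
// '!' included, 66 without; word length 10; 11 bytes per line.
const lines67 = 67n ** 10n;
const lines66 = 66n ** 10n;
console.log(lines67.toString()); // 1822837804551761449 (matches crunch's line count)
console.log(lines66.toString()); // 1568336880910795776 (matches crunch's line count)

const bytes67 = lines67 * 11n;
const bytes66 = lines66 * 11n;
console.log(bytes66.toString()); // 17251705690018753536 (matches crunch's byte count)
console.log(bytes67.toString()); // 20051215850069375939 (crunch printed 1604471776359824323)

// The true 67-character byte count exceeds 2^64 - 1; reduced modulo 2^64
// it reproduces exactly the smaller figure crunch printed:
console.log((bytes67 % (1n << 64n)).toString()); // 1604471776359824323
```

This supports the suspicion above: the smaller figure looks like an unsigned 64-bit overflow in crunch's size calculation.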

Difference between regular grammar and CFG in generating computation histories and $\Sigma^*$

I would like to ask for intuition behind the difference between the way a CFG generates $\Sigma^*$ and the way a regular grammar generates $\Sigma^*$. I got the examples here from Sipser. Let $ALL_{CFG}$ refer to the language of CFGs that generate $\Sigma^*$, and let $ALL_{REX}$ refer to the language of regular expressions (equivalent to regular grammars) that generate $\Sigma^*$.

From what I got, we have:

  • $ALL_{CFG}$ is not decidable; it is not even Turing-recognizable. Let $\overline{A_{TM}}$ refer to the language of pairs $\langle M, w \rangle$ such that the TM $M$ does not accept the input word $w$. We can reduce $\overline{A_{TM}}$ to $ALL_{CFG}$ in polynomial time using computation histories. The reduction constructs a CFG which generates all possible words where: 1) the first characters do not match $w$, 2) the last characters do not match accepting configurations, and 3) the characters do not match valid transitions of $M$'s configurations. Since the reduction maps $\overline{A_{TM}}$ to $ALL_{CFG}$, and $\overline{A_{TM}}$ is not Turing-recognizable, $ALL_{CFG}$ is not Turing-recognizable.

  • $ALL_{REX}$ is decidable, since it is decidable whether a finite automaton accepts $\Sigma^*$. However, any regular language $R$ can be mapped to the language $ALL_{REX} \cap f(R_M)$, where $R_M$ is a TM that decides $R$, and $f(R_M)$ is a similar reduction via computation histories as outlined above. In more detail, $f(R_M)$ is the regular grammar that generates all possible words where: 1) the first characters do not match $w$, 2) the last characters do not match rejecting configurations, and 3) the characters do not match valid transitions of $R_M$'s configurations. The decider for $ALL_{REX} \cap f(R_M)$ checks whether $f(R_M)$ is equal to $\Sigma^*$.

So, I would like to ask:

From the above, both regular grammars and CFGs can generate computation histories of a TM. But what is it about the CFG's grammar structure that makes it valid to reduce $\overline{A_{TM}}$ to $ALL_{CFG}$, while it is not possible to reduce $\overline{A_{TM}}$ to $ALL_{REX} \cap f(A_{TM})$? I know that we cannot reduce $\overline{A_{TM}}$ to $ALL_{REX} \cap f(A_{TM})$, since $ALL_{REX} \cap f(A_{TM})$ is decidable while $\overline{A_{TM}}$ is not Turing-recognizable… but I would like an answer in terms of the difference in grammar rules between CFGs and regular grammars.

How to find Exponential Generating Function (EGF) for the class of rooted labelled trees?

Does anyone know how to solve this problem?

Problem: Let $C$ be the class of rooted labelled trees such that the labels along any path stemming from the root form an increasing sequence. Use the boxed product to construct $C$, and then use it to find the EGF of $C$.

Unfortunately, I don't know how to start solving this problem, so I have no ideas.