What exactly happens if I use a library with a GPL license in a commercial product?

I’m starting a project that involves OCR, and I was planning to use pytesseract, which uses the GPLv3 license. I’ve tried to understand what exactly the GPL license means when it says “modify”, but I’m still not sure. If I import and use the library, do I then need to release my own code under the GPL license?

Also, what does the GPL license mean when it says “distribute”? If I write a piece of software and charge companies for me to examine their data with it, am I distributing it?

Thank you so much for any help.

Versioning: Where exactly in the code or repository should I write the version number?

Right now I’m doing this:

  1. Update the readme to include the new version number and what it does.
  2. Put the version number in the git commit message (e.g. git commit -m "1.2.1: this does that").
  3. Push the repository.

Is this the proper way of doing it? I had a hunch that this is wrong, because this way I have no idea how to version the branches. Until now I’ve almost always worked directly on master, because more often than not I work on a repository alone. I want to change this.

Edit: specifically I’m working on web apps where I have no compiled binaries.

Show that for each even n, there exists a graph with n vertices, such that the 2-approx VC alg returns a VC which is exactly twice the Minimum-VC

Question: Show that for each even n, there exists a graph with n vertices such that ALG (the algorithm) returns a vertex cover which is exactly twice the size of a minimum vertex cover.

Define ALG: a simple deterministic 2-approximation algorithm for the vertex cover problem in an unweighted graph. Output of ALG: a vertex cover which is at most twice the size of a minimum vertex cover.

How will we show that?

My struggle: I tried to solve this in the following way (below), but I’m not sure that my proof is correct; specifically, will there always exist such a bipartite graph when n is even?

How I will approach the proof: We’ll show that the size of a maximum matching in a bipartite graph is equal to the size of a minimum vertex cover, and then we’ll show that the output of ALG is exactly twice the size of a maximum matching, and thus twice the size of a minimum vertex cover. Hence there always exists a bipartite graph, divided into two disjoint sets of size n/2, whose minimum vertex cover is exactly half the size of the cover returned by ALG.

My proof:

Step 1: There always exists a bipartite graph in which the size of a maximum matching equals the size of a minimum vertex cover. Proof: Kőnig’s theorem states that, in any bipartite graph, the number of edges in a maximum matching is equal to the number of vertices in a minimum vertex cover.

Step 2: A matching in a graph G is a set of edges such that no two edges share a common vertex. A maximal matching M is a matching to which we cannot add any of the remaining edges while maintaining the matching constraint; in other words, if we add any further edge, then M is no longer a valid matching. Note that a maximal matching is different from a maximum matching: a maximum matching is a largest possible matching. So a maximum matching is a maximal matching, but the converse is not necessarily true (see the figures below for clarity).

Figure 4 corresponds to a maximum matching (we cannot do better than 3), but both Figure 4 and Figure 5 are valid maximal matchings (we cannot add any of the remaining edges to the set without violating the matching condition).

We can conclude that a maximum matching is ALWAYS a maximal matching (vice versa not always true).

Define C: the output of ALG – the vertex cover set returned by the “2-approx for minimum vertex cover” algorithm.

Let’s also add the ALG algorithm for more clarity:

    1: C = Ø
    2: while there is an uncovered edge (u, v):
    3:     add vertex u to C
    4:     add vertex v to C
    5: return C

That’s it! For every uncovered edge (in any order), we simply add both endpoints to the cover set (lines 3 and 4). In other words,

Define: If a graph has a minimum vertex cover of size C*, then the algorithm will always produce a cover of size C such that C <= 2C*.


  • Claim: The approximation ratio of the algorithm is 2.

  • Let S* be an optimal vertex cover, with |S*| = C*.

  • Let E# be the set of edges picked by the algorithm, and let S be the cover it returns (so |S| = C).

  • Every edge in E# has at least one endpoint in S* (since S* is a cover), and no two edges in E# share an endpoint, so |S*| >= |E#|.

  • Every edge in E# contributes exactly 2 vertices to S, and there are no other vertices in S, so |S| = 2|E#|.

  • Thus |S| = 2|E#| <= 2|S*|, i.e. C/C* <= 2.

Step 3:

Lemma: The proposed algorithm ALG produces a maximal matching. The edges picked by the algorithm are vertex-disjoint: it will never pick both edges (u,v) and (u,w), because when it encounters edge (u,v) it adds both vertices u and v, so edge (u,w) is then covered by vertex u. Therefore the picked edges form a matching in G, and it is maximal because the algorithm iterates until there are no uncovered edges.

Define: M – maximal matching.

Step 4:

From the algorithm and the Lemma, we can conclude that the algorithm produces a vertex cover of size 2|M| (for each matched edge, we pick both endpoints). Now, because n is even, take the complete bipartite graph K_{n/2,n/2} with two disjoint sets of size n/2. In K_{n/2,n/2} every maximal matching is perfect (if two vertices on opposite sides were unmatched, the edge between them could still be added), so the matching M produced by ALG is also a maximum matching with |M| = n/2, and by Kőnig’s theorem (step 1) the minimum vertex cover also has size n/2. Therefore ALG returns a cover of size 2|M| = n, exactly twice the size of the minimum vertex cover.
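The argument can be checked directly. Below is a minimal sketch in Python (function and variable names are my own, not from the question) of ALG together with the tight example K_{n/2,n/2}:

```python
def two_approx_vertex_cover(edges):
    """ALG: while there is an uncovered edge (u, v), add both u and v."""
    cover = set()
    for u, v in edges:                         # any edge order works
        if u not in cover and v not in cover:  # (u, v) is still uncovered
            cover.add(u)
            cover.add(v)
    return cover

# Tight example: the complete bipartite graph K_{n/2, n/2}.
# Its minimum vertex cover has size n/2 (one full side), but every
# maximal matching is perfect, so ALG returns all n vertices.
n = 8
left = [("L", i) for i in range(n // 2)]
right = [("R", j) for j in range(n // 2)]
edges = [(u, v) for u in left for v in right]

cover = two_approx_vertex_cover(edges)
print(len(cover))  # 8 = n, exactly twice the minimum cover size n/2 = 4
```

Running this on any even n gives a cover of size n, matching the claimed ratio of exactly 2.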


3SAT instance with EXACTLY 3 instances of each literal

I’m trying to solve a question which requires me to:

  1. Prove that an instance of 3SAT where each literal appears in exactly 3 clauses (positive and negative appearances combined) is always satisfiable.

  2. Find a polynomial time algorithm to find a satisfying assignment for it.

My Solution

I’m not sure how to prove part 1. For part 2, I’m trying a reduction to an instance of Vertex Cover in which each literal has 2 nodes – one positive, one negative – and each node is connected to the other literals it’s in a clause with. A vertex cover of size m = # of literals will give us the assignment needed.

I’m not sure if I’m on the right path or not. Any help would be appreciated!

What exactly does iOS jailbreak mean? Does it give you root privileges to the device?

Everywhere on the internet, jailbreaking is described as the equivalent of rooting on Android, but is that true? For example, Android is based on the Linux kernel, so rooting means flashing the su binary, giving you “sudo” or root privileges as on Ubuntu Linux. What is the equivalent process in an iOS jailbreak? Does it give the user the ability to run a terminal with sudo privileges and complete control over the device?

As far as I understand, iOS is based on a BSD-derived kernel that implements jails by making use of the chroot syscall, and jailbreaking means removing this protection via an exploit. But is this equivalent to root privileges, or is root still locked even after you jailbreak your device?

I would be very grateful if someone could clear this up. No article or book covers this in the detail that I require.

Why are exactly 4 bit windows used in the lookup table of libsecp256k1 to speed up point multiplications?

From the Readme of secp256k1 we can see the following:

Use a precomputed table of multiples of powers of 16 multiplied with the generator, so general multiplication becomes a series of additions.

I was wondering why in particular the table uses multiples of powers of 16. I would have expected a larger window, or a more dynamic approach that includes dynamic caching.

Let me elaborate a little bit:

With powers of 16, each table lookup covers 4 bits of the scalar, meaning we have 256 / 4 = 64 buckets with 16 entries per bucket.

Let n be the number of bits in a window for which we precompute multiples of g. For n > 1, this gives the general formula for the number of precomputed values in our table:

(256 / n) * 2^n

With n = 4 we have 64 * 16 = 1024 entries.

When choosing n = 8 we would have 32 * 256 = 8192 entries. However, when actually computing a multiplication we would only need 32 additions instead of 64, a speedup of a factor of 2 for 8 times as much memory usage in our lookup table.

With n = 16 we would have 16 * 65536 = 1048576 entries, or 1M * sizeof(point) of main memory, to need only 16 point additions when computing a multiplication.
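For reference, the counts above can be tabulated with a short sketch; this simply evaluates the (256 / n) * 2^n formula from the question for a fixed-window scheme, and is not libsecp256k1 code:

```python
# Fixed w-bit windows over a 256-bit scalar:
#   additions per multiplication = 256 / w   (one table lookup per window)
#   precomputed table entries    = (256 / w) * 2**w
for w in (4, 8, 16):
    additions = 256 // w
    entries = additions * 2**w
    print(f"w={w:2d}: {additions:3d} additions, {entries} table entries")
```

This reproduces the three cases discussed: 1024 entries for 64 additions, 8192 for 32, and 1048576 for 16.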

Obviously such a big lookup table takes some time to set up when initializing the library, even if the table were precomputed and shipped in binary form with the library.

Anyway, I was wondering about the particular choice of 4 bits. I would assume that 8 bits would be better, and even 16-bit windows seem fairly reasonable.

Ubuntu 18.04 disconnects from Win10 RDP after exactly 30 min

Dell Precision 7520 laptop running Ubuntu 18.04 with xrdp installed. A self-built Windows 10 machine makes an RDP (Remote Desktop Protocol) connection to the Ubuntu machine on the local network. At exactly 30 minutes the connection is dropped and Ubuntu goes into suspend mode. Hitting the keyboard on the Ubuntu machine reconnects with no problem. In Ubuntu 18.04, under Settings -> Power -> Suspend & Power Button, Automatic Suspend is set to 45 minutes on battery power. When connected through Win10 RDP, the Settings -> Power -> Suspend & Power Button panel is not visible. Any ideas on how to override the automatic suspend setting when connecting from Win10 RDP?