Unable to open JupyterLab on Google Compute notebook instance

I have been unable to access JupyterLab for a couple of days now via AI Platform > Notebook instances > Open JupyterLab. I created an instance with the + New Instance option and launched a TensorFlow machine, but when I click the Open JupyterLab button, all I get is

504. That’s an error.  That’s all we know. 

I have tried downgrading the notebook version and restarting the service, as suggested here.

sudo pip3 install notebook==5.7.5
sudo service jupyter restart

This brings me back to the familiar JupyterLab screen, but the fix does not persist (when I return to the instance later, Open JupyterLab throws the same error again).

Is this a bug in JupyterLab? Is there a solution?

How to force a Google Compute Engine Ubuntu instance to do fsck?

I have an instance in Google Compute Engine (a cloud server) running Ubuntu 16.04.

I noticed filesystem corruption on the root filesystem as follows:

==> ls -l data/vocabulary/
ls: cannot access 'data/vocabulary/Makefile': Permission denied
ls: cannot access 'data/vocabulary/vocab-count.txt': Permission denied
ls: cannot access 'data/vocabulary/vocab-random-access.db': Permission denied
ls: cannot access 'data/vocabulary/vocab-list.txt': Permission denied
ls: cannot access 'data/vocabulary/vocab.db': Permission denied
ls: cannot access 'data/vocabulary/CVS': Permission denied
total 0
d????????? ? ? ? ?            ? CVS
-????????? ? ? ? ?            ? Makefile
-????????? ? ? ? ?            ? vocab-count.txt
-????????? ? ? ? ?            ? vocab-list.txt
-????????? ? ? ? ?            ? vocab-random-access.db
-????????? ? ? ? ?            ? vocab.db

However, my attempts to force fsck on reboot were unsuccessful. I would touch the file /forcefsck and reboot, but fsck simply would not run.

I also tried setting the remaining mount count so that fsck would be triggered, but that does not seem to be effective either.

What should I do to proceed with fsck?

How to compute FOLLOW sets for grammars?

How do I construct the FOLLOW sets for the following grammar?

S -> E$
E -> TX
T -> (E) | idY
X -> +E | ε
Y -> *E | ε

What I have so far:
first(S) = {(,id}
first(E) = {(,id}
first(T) = {(,id}
first(X) = {+,ε}
first(Y) = {*,ε}

An explanation of how the FOLLOW sets are derived would be very much appreciated. Thanks in advance.
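
In case it helps show what I have been trying, here is my rough Python sketch of the standard fixed-point construction (the grammar encoding, the 'eps' marker for the empty string, and treating $ as an ordinary terminal are my own assumptions):

# A rough sketch of the textbook fixed-point construction. The grammar encoding,
# the 'eps' marker for the empty string, and treating '$' as a plain terminal are
# my own assumptions.
grammar = {
    'S': [['E', '$']],
    'E': [['T', 'X']],
    'T': [['(', 'E', ')'], ['id', 'Y']],
    'X': [['+', 'E'], ['eps']],
    'Y': [['*', 'E'], ['eps']],
}
nonterminals = set(grammar)

def first_of(symbols, first):
    # FIRST of a sequence of grammar symbols, given the FIRST sets computed so far
    result = set()
    for s in symbols:
        if s not in nonterminals:      # terminal
            result.add(s)
            return result
        result |= first[s] - {'eps'}
        if 'eps' not in first[s]:
            return result
    result.add('eps')                  # the whole sequence can derive the empty string
    return result

first = {nt: set() for nt in nonterminals}
changed = True
while changed:                         # FIRST sets, iterated to a fixed point
    changed = False
    for nt, prods in grammar.items():
        for prod in prods:
            rhs = [s for s in prod if s != 'eps']
            new = first_of(rhs, first)
            if not new <= first[nt]:
                first[nt] |= new
                changed = True

follow = {nt: set() for nt in nonterminals}
changed = True
while changed:                         # FOLLOW sets, iterated to a fixed point
    changed = False
    for nt, prods in grammar.items():
        for prod in prods:
            rhs = [s for s in prod if s != 'eps']
            for i, sym in enumerate(rhs):
                if sym not in nonterminals:
                    continue
                tail = first_of(rhs[i + 1:], first)
                new = tail - {'eps'}
                if 'eps' in tail:      # everything after sym can vanish
                    new |= follow[nt]
                if not new <= follow[sym]:
                    follow[sym] |= new
                    changed = True

print(first)    # should match the FIRST sets I listed above
print(follow)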

How to compute the correlation coefficient?

The question is:

One package of potatoes contains 10 potatoes and weighs exactly 500 grams. Denote by $ X_{1}, \dots, X_{10}$ the weights of each potato.

Are the random variables $ X_{1}, \dots, X_{10}$ independent?

Compute the correlation coefficient $\rho(X, Y)$, where $X=X_{1}$ and $Y = \sum_{i=2}^{10} X_{i}$.

I know this formula $\rho=\frac{cov(X,Y)}{\sigma_{X} \sigma_{Y}}$ and that $cov(X,Y)=E[XY] - E[X]E[Y]$.

So it seems that it is just a matter of plugging in the right values and computing. But I'm not sure how to calculate $E[X]$ and $E[Y]$...

I think it is something along these lines: I know that $E[X]=xf(x)$, and here $x=X_{1}$ and $f(x) = 1$, so this equals $X_{1}$? This seems true (since this set only contains this one potato, so we must always get it when we choose). But the answer should be a number, not a random variable...

The same goes for $ E[Y]$ .

I know from the solutions that the answer is $\rho(X,Y)=-1$, and thus they are (negatively) correlated.
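
For what it's worth, here is my own attempt at using the fixed total weight (I am not sure this is the intended argument): since the ten weights always add up to exactly 500, $Y = \sum_{i=2}^{10} X_{i} = 500 - X$, so

$cov(X,Y) = cov(X, 500 - X) = -Var(X)$, and $\sigma_{Y} = \sigma_{X}$, giving $\rho(X,Y) = \frac{cov(X,Y)}{\sigma_{X}\sigma_{Y}} = \frac{-Var(X)}{\sigma_{X}\sigma_{X}} = -1$

which would at least match the given answer, but I am not sure this is how $E[X]$ and $E[Y]$ are supposed to be handled.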

Can hypercomputation compute the impossible?

There are things which are illogical/logically impossible (like saying that 2+2=4 and 2+2=5). Without changing anything in the axioms of mathematics or logic, this would be a contradiction and would be inconsistent and illogical/logically impossible.

There are other types of logic systems apart from classical logic, like paraconsistent logics or even trivialism, that allow these contradictions to occur, prove them to be right, and work with them.

We can make a paraconsistent or trivialist system and work with it. For example, with trivialism, in theory we would be able to derive and state everything we wanted (since literally everything is provable, even illogical/logically impossible inconsistencies and contradictions), but we as humans (or as brains) are limited and can't conceive everything we want (at least as far as I know). Therefore, no matter how many trivialist models we create and how much time we spend working with them, we would never find or conceive many illogical/logically impossible things, because they are just that: impossible. There are things that are impossible to describe and conceive.

For example, Russell's set is the set of all sets that do not contain themselves. If Russell's set contains itself, then it cannot contain itself, since it only contains sets that don't contain themselves. But if Russell's set does not contain itself, then it must contain itself, since it contains all sets that don't contain themselves. There are quite a few logic bombs like this. You cannot ever compute the contents of Russell's set, and there are more formal, mathematical ways to present it. All of them have in common that you can't actually compute what the set is, whether you do it by hand, in your head, or on a computer. It's just a statement that cannot be fully logically processed. If you take every possible state the human brain can be in, none of them includes the computation of the contents of Russell's set. That is, not only can the contents not be computed, they cannot even be represented. No stimulus can cause us to comprehend Russell's set, since such comprehension is not possible to begin with. It does not have a solution.

Even if we try to solve it using trivialism, we would just be able to write down a "solution" that does not make sense and prove that it makes sense and is the real solution, but we would not be able to have a solution that makes sense "outside" the realm of trivialism (for example in classical logic), even though, using trivialism, we could prove that such a solution makes sense in whatever context and logic system.

But what about hypercomputational machines (for example oracle-like machines)? I've read about some models of hypercomputation which are compatible with paraconsistent or trivialist logics. I've also read that there are some models of hypercomputation (particularly those oracle-like models which use a black box) where, essentially, the hypercomputer is an algorithm that cannot exist. This might be because such an algorithm is fundamentally forbidden by logic itself (and is hidden inside a black box). Would any of these be capable of computing/"conceiving" the impossible things I wrote about before? Do you know of anything that would help?

I was also thinking that maybe we could evolve enough to get brains that would be capable of computing all of these… So, could our brains ever evolve so much that we could conceive and compute all these illogical/logically impossible things that cannot exist?

Show Lebesgue Integrable and Compute the Two Iterated Integrals

(I am working on problems having to do with Fubini’s Theorem)

Given $\alpha \in (0,\infty)$, show that the function $(x, y) \mapsto e^{-\alpha x y}\cdot \sin x$ is Lebesgue integrable on $(0,\infty) \times (1,\infty)$. Compute the two iterated integrals and use the result to compute

$\int_0^{\infty} e^{\alpha x} \frac{\sin x}{x}\,dx$

How do I show the function is Lebesgue integrable? Usually I need to show that the Lebesgue integral is finite… but I am new to having two variables in these problems.
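
My best guess so far (I am not sure this is the right kind of bound) is to dominate the integrand using $|\sin x| \le x$ for $x \ge 0$:

$\int_1^{\infty}\int_0^{\infty} \left|e^{-\alpha x y}\sin x\right| dx\,dy \le \int_1^{\infty}\int_0^{\infty} x\, e^{-\alpha x y}\,dx\,dy = \int_1^{\infty} \frac{1}{\alpha^{2} y^{2}}\,dy = \frac{1}{\alpha^{2}} < \infty$

so that, by Tonelli, the function would be Lebesgue integrable on $(0,\infty) \times (1,\infty)$. Is that the idea?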

Now, for evaluating the iterated integrals: I have evaluated each of them below and then set them equal, since the iterated integrals should be equal. Is that correct?

dxdy

$\int_1^{\infty} \int_0^{\infty} e^{-\alpha x y} \sin x \,dx\,dy$

$I = \int_0^{\infty} e^{-\alpha x y} \sin x \,dx$

Let $u = e^{-\alpha y x}$, $du = -\alpha y e^{-\alpha y x}\,dx$, $v = -\cos x$, $dv = \sin x\,dx$.

$I = -\cos x \, e^{-\alpha y x}\Big\rvert_0^{\infty} - \alpha y \int_0^{\infty}\cos x \, e^{-\alpha y x}\,dx$

Let $u = e^{-\alpha y x}$, $du = -\alpha y e^{-\alpha y x}\,dx$, $v = \sin x$, $dv = \cos x\,dx$.

$I = (0-(-1)(1)) - \alpha y \left[ e^{-\alpha y x}\sin x\Big\rvert_0^{\infty} + \alpha y \int_0^{\infty} e^{-\alpha x y} \sin x \,dx \right]$

$I = 1 - \alpha y(0-0) - \alpha^2 y^2 I$

$ I = \frac{1}{1+\alpha^2 y^2}$

Now we have,

$\int_1^{\infty} \frac{1}{1+\alpha^2 y^2}\,dy$

Let $u = \alpha y$, $du = \alpha\,dy$.

$= \frac{1}{\alpha} \int_{\alpha}^{\infty} \frac{1}{1+u^2}\,du = \frac{1}{\alpha} \arctan(\alpha y)\Big\rvert_1^{\infty} = \frac{1}{\alpha} \left(\frac{\pi}{2} - \arctan\alpha\right)$

dydx

$\int_0^{\infty} \int_1^{\infty} e^{-\alpha x y} \sin x \,dy\,dx$

$= \int_0^{\infty} \left[\frac{\sin x}{-\alpha x} \, e^{-\alpha x y}\right]_1^{\infty} dx = \int_0^{\infty} \frac{\sin x}{-\alpha x} \left(0 - e^{-\alpha x}\right) dx = \frac{1}{\alpha} \int_0^{\infty} e^{-\alpha x} \frac{\sin x}{x}\,dx$

Then I set them equal to evaluate the integral the problem asks for.

$\frac{1}{\alpha} \left(\frac{\pi}{2} - \arctan\alpha\right) = \frac{1}{\alpha} \int_0^{\infty} e^{-\alpha x} \frac{\sin x}{x}\,dx$

$\implies \frac{\pi}{2} - \arctan\alpha = \int_0^{\infty} e^{-\alpha x} \frac{\sin x}{x}\,dx$

My issue is that in the problem statement the exponent is $\alpha x$, not $-\alpha x$.
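
As a sanity check on the sign, here is a quick numerical comparison I tried (this assumes the intended integrand really is $e^{-\alpha x} \frac{\sin x}{x}$; it uses scipy, and the choice $\alpha = 2$ is arbitrary):

import numpy as np
from scipy import integrate

alpha = 2.0

def f(x, a=alpha):
    # e^{-a x} * sin(x) / x, with the removable singularity at x = 0 patched
    return 1.0 if x == 0.0 else np.exp(-a * x) * np.sin(x) / x

numeric, _ = integrate.quad(f, 0.0, np.inf)
closed_form = np.pi / 2 - np.arctan(alpha)

print(numeric, closed_form)  # both come out to about 0.4636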

How to compute the amplitude of a sampled wave in three-dimensional space

I have a sine or cosine wave in a random direction in 3D space, with a random orientation. I want to calculate the amplitude and axis of such a wave. I am sampling the wave at a high rate compared to the frequency of the wave, and I am getting the 6-DOF values for each sample.

I am looking for pointers on how to compute the amplitude and axis of the wave.
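
To make my question more concrete, here is the rough approach I have been considering, sketched in Python (stacking the sampled positions into an N x 3 array and using an SVD / principal-component fit are my own guesses, not something I am sure is right):

import numpy as np

def estimate_axis_and_amplitude(samples):
    # samples: (N, 3) array of sampled positions of the oscillating point (my assumed input format)
    center = samples.mean(axis=0)              # mean position of the oscillation
    centered = samples - center
    # dominant direction of the displacements = first right-singular vector
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]
    # project the displacements onto that axis; for a sinusoid sampled over
    # whole periods, the RMS of the projection is amplitude / sqrt(2)
    projection = centered @ axis
    amplitude = np.sqrt(2.0) * projection.std()
    return axis, amplitude

Is this a reasonable direction, or is there a better-established way?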

Dijkstra's algorithm to compute shortest paths using at least k edges

I have a graph G = (V, E) where each edge is bidirectional with a positive weight. I want to find the shortest path from vertex s to vertex t that uses at least k edges but fewer than 20. No vertex may be repeated in a path.

I am aware of this problem: Dijkstra's algorithm to compute shortest paths using k edges?

That problem wants to find the shortest paths using at most k edges.

My current idea is to create the product graph via V' = V x {0, 1, 2, ..., 20} and then ignore any shortest paths from (s, 0) to (t, n) where n < k. However, because the product construction essentially duplicates nodes, I am not aware of a way to ignore duplicate vertices in the path-finding algorithm.
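
Here is a small Python sketch of that idea, just to show what I mean (the vertex names, weights, and adjacency-list format are made up; note that it does not prevent the same original vertex from appearing in several layers, which is exactly the part I am stuck on):

import heapq

def shortest_path_with_edge_bounds(adj, s, t, k, max_edges=20):
    # adj: dict mapping each vertex to a list of (neighbor, weight) pairs,
    # with both directions listed since the edges are bidirectional.
    # States are (vertex, edges_used), i.e. the product graph V x {0, ..., max_edges - 1}.
    dist = {(s, 0): 0.0}
    heap = [(0.0, s, 0)]
    best = float('inf')
    while heap:
        d, v, n = heapq.heappop(heap)
        if d > dist.get((v, n), float('inf')):
            continue                     # stale heap entry
        if v == t and n >= k:
            best = min(best, d)          # reached t with at least k (and fewer than 20) edges
        if n + 1 >= max_edges:
            continue                     # one more edge would make the path too long
        for w, cost in adj.get(v, []):
            nd = d + cost
            if nd < dist.get((w, n + 1), float('inf')):
                dist[(w, n + 1)] = nd
                heapq.heappush(heap, (nd, w, n + 1))
    return best

adj = {
    's': [('a', 1), ('b', 4)],
    'a': [('s', 1), ('b', 1), ('t', 5)],
    'b': [('a', 1), ('s', 4), ('t', 1)],
    't': [('a', 5), ('b', 1)],
}
print(shortest_path_with_edge_bounds(adj, 's', 't', k=3))  # 3, via s-a-b-t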

P.S. If k were very large, say, k = |V|, wouldn’t this be the Hamiltonian path problem and therefore NP-complete?

Given probability vectors x and y, how can I compute the probability vector of z = x + y using linear algebra operations?

I have a probability vector x (hours to get 'A' done): v_x = [1/4, 1/4, 1/4, 1/4]

Another probability vector y (hours to get 'B' done): v_y = [1/4, 1/2, 1/4]

How can I get the probability vector z (hours to get both A and B done, assuming they can only be done one after the other and are independent) using linear algebra operations?
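
My current guess is that this is a discrete convolution, which can also be written as a matrix-vector product with a banded (Toeplitz-like) matrix; here is a small numpy sketch of what I mean (which hour each entry corresponds to is my own assumption, since I did not state the supports above):

import numpy as np

# P(A takes 1..4 hours) and P(B takes 1..3 hours) -- the hour labels are assumed
v_x = np.array([1/4, 1/4, 1/4, 1/4])
v_y = np.array([1/4, 1/2, 1/4])

# distribution of the total time z = x + y for independent tasks is the convolution
v_z = np.convolve(v_x, v_y)
print(v_z, v_z.sum())        # 6 entries (totals of 2..7 hours), summing to 1

# the same computation as a linear algebra operation: a banded matrix whose
# columns are shifted copies of v_y, applied to v_x
M = np.zeros((len(v_x) + len(v_y) - 1, len(v_x)))
for j in range(len(v_x)):
    M[j:j + len(v_y), j] = v_y
print(M @ v_x)               # identical to np.convolve(v_x, v_y)

Is that the right way to think about it?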