What are the three points of view in Kolmogorov Complexity?

I was reviewing for my finals and found this question, about which I have no clue at all.

Compare the following two statements from three points of view:

1. There exists a constant $$c > 0$$ such that for all palindromes $$x \in \{0, 1\}^*$$ we have $$K(x) \leq \lfloor \ell(x) / 2 \rfloor + c$$.

2. There exists a constant $$c > 0$$ such that for all $$x \in \{0, 1\}^*$$ we have $$K(\overline{x}) \leq K(x) + c$$ where $$\overline{x}$$ is the complement of $$x$$.

So what are the three points of view I am supposed to use, and where should I start?
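One way to make statement 1 concrete is the coding idea behind it: a palindrome is fully determined by its first half, so a fixed decoder plus roughly $$\ell(x)/2$$ bits describes $$x$$. A minimal Python sketch (function names are my own):

```python
def half(x: str) -> str:
    """A palindrome is determined by its first ceil(n/2) bits."""
    return x[:(len(x) + 1) // 2]

def rebuild(h: str, n: int) -> str:
    """Reconstruct the length-n palindrome from its first half.

    Mirror the half; for odd n, skip the duplicated middle bit.
    """
    return h + h[::-1][n % 2:]
```

Since `rebuild` is a fixed program, its length is absorbed into the constant $$c$$; only the half-string varies with $$x$$.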

Kolmogorov Complexity of String Concatenation

For all bit strings x, y and Kolmogorov Complexity K, is

K(xy) > K(x)?
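True $$K$$ is uncomputable, but a real compressor gives an upper-bound proxy that makes the question concrete. A sketch using zlib as the stand-in compressor (my choice of proxy; compressed length only upper-bounds $$K$$ and cannot settle the question for true $$K$$):

```python
import zlib

def clen(s: bytes) -> int:
    """Compressed length: an upper-bound proxy for Kolmogorov complexity."""
    return len(zlib.compress(s, 9))

x = bytes(range(256)) * 4  # a mildly patterned 1024-byte string
# Concatenation need not add much description length: x + x is barely
# harder to describe than x itself, so clen(x + x) is far below 2 * clen(x).
```

This only builds intuition: whether $$K(xy) > K(x)$$ always holds for the true complexity is exactly what the question asks.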

Kolmogorov complexity of $y$ given $x = yz$ with $K(x) \geq \ell(x) - O(1)$

I am trying to solve Exercise 2.2.2 from “An Introduction to Kolmogorov Complexity and Its Applications” (Li & Vitányi, 3rd edition). The exercise is as follows (paraphrased):

Let $$x$$ satisfy $$K(x) \geq n - O(1)$$, where $$n = \ell(x)$$ is the binary length of $$x$$. Show that $$K(y) \geq \frac{n}{2} - O(1)$$ for $$x = yz$$ with $$\ell(y) = \ell(z)$$.

I tried to apply the incompressibility method, somehow exploiting the fact that there are many such $$x$$, but failed. More direct approaches of trying to come up with clever recursive functions of $$y$$ and $$z$$ also didn’t work out.
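One standard tool for problems like this is a self-delimiting (prefix-free) encoding, which lets a decoder know where a description of $$y$$ ends so that $$z$$ can follow verbatim. A Python sketch of the classic bit-doubling code (note its $$2\log n + O(1)$$ overhead is exactly why the naive bound only yields $$K(y) \geq n/2 - O(\log n)$$, weaker than the $$O(1)$$ the exercise asks for):

```python
def encode_self_delimiting(s: str) -> str:
    """Prefix-free code: double each bit of len(s) in binary, append '01', then s."""
    n = format(len(s), "b")
    return "".join(b * 2 for b in n) + "01" + s

def decode_self_delimiting(stream: str):
    """Return (s, rest): read the doubled length prefix, then s, leaving the rest."""
    i, bits = 0, []
    while stream[i:i + 2] != "01":   # "00" encodes 0, "11" encodes 1, "01" terminates
        bits.append(stream[i])
        i += 2
    n = int("".join(bits), 2)
    i += 2                            # skip the "01" terminator
    return stream[i:i + n], stream[i + n:]
```

The `rest` returned by the decoder is where a verbatim copy of $$z$$ would sit in a description of $$x$$.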

Kolmogorov proof: L = {x : K(x) >= |x|} is undecidable

I am having a hard time understanding the proof by contradiction for the claim that $$L = \{x : K(x) \geq |x|\}$$ is undecidable.

The proof is as follows:

M’ = ” On input n:

1. Enumerate all n-bit strings x in lexicographic order.
2. Simulate H (a presumed decider for L, which accepts w iff w is incompressible) on each x.
3. Output the first x that H accepts. ”

Here the TM M’ produces an incompressible string using only O(log n) bits to specify n, so we can compress incompressible strings, which is a contradiction.

I understand the M’ construction. However, I do not understand where exactly the contradiction happens. As I see it, M’ outputs an x that is incompressible (ensured by H), but how is it also compressible at the same time?
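A Python sketch of M’ may make the contradiction easier to see (H is hypothetical; no such decider exists, which is the whole point). The x that M’ outputs is described by the fixed text of M’ plus the roughly log₂ n bits of n, i.e. by O(log n) bits total, so K(x) ≤ O(log n) < n for large n, while H accepted x precisely because K(x) ≥ n:

```python
from itertools import product

def M_prime(n: int, H):
    """On input n: output the first n-bit string (lexicographic order)
    accepted by the hypothetical decider H for L = {x : K(x) >= |x|}."""
    for bits in product("01", repeat=n):
        x = "".join(bits)
        if H(x):          # H is assumed to halt on every input
            return x
    # Unreachable for a genuine H: at most 2**n - 1 strings have
    # descriptions shorter than n, so some n-bit string is incompressible.
    return None
```

The program text of `M_prime` is a constant-size description; only n varies, which is the O(log n) part.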

Examples of exact computation of Kolmogorov complexity?

First question: it is known that Kolmogorov complexity (KC) is not computable in general. I would like to know whether there are any “real-world” examples or applications where the KC has been computed exactly, rather than only approximated through practical compression algorithms.
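As a partial illustration of what “exact” can mean: true K is uncomputable, but for a fixed restricted (non-universal) description language, the exact minimal description length is computable by brute force. A toy Python sketch (the interpreter below is entirely my own invention):

```python
from itertools import product

def interp(p: str):
    """Toy description language: '1' + s decodes to s literally;
    '0' + binary(k) decodes to the string of k zeros."""
    if p.startswith("1"):
        return p[1:]
    if len(p) > 1:
        return "0" * int(p[1:], 2)
    return None

def toy_complexity(target: str, max_len: int = 12):
    """Exact minimal program length for target under interp, by brute force."""
    for length in range(max_len + 1):
        for bits in product("01", repeat=length):
            if interp("".join(bits)) == target:
                return length
    return None
```

Under `interp`, the string of 100 zeros has toy complexity 8 (`"0" + "1100100"`), while arbitrary strings cost their length plus one. No such exhaustive search is possible for a universal language, since halting cannot be decided.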

Second question, and I quote from the “Elements of Information Theory” textbook: “one can say ‘Print out the first 1,239,875,981,825,931 bits of the square root of e.’ Allowing 8 bits per character (ASCII), we see that the above unambiguous 73-symbol program demonstrates that the Kolmogorov complexity of this huge number is no greater than 8 × 73 = 584 bits. The fact that there is a simple algorithm to calculate the square root of e provides the saving in descriptive complexity.” Why take the 584 bits as an upper bound for the KC and not include the size of the actual “simple algorithm” that calculates the square root of e? It feels like cheating…
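The quoted program can be made concrete. The point is that the program text below has a fixed length (the “simple algorithm” is part of it, and by the invariance theorem the choice of interpreter only shifts the bound by an additive constant); only the numeral for n grows, at about log₁₀ n characters. A sketch using Python’s `decimal` module:

```python
from decimal import Decimal, getcontext

def sqrt_e_digits(n: int) -> str:
    """First n decimal digits of sqrt(e). The program text has fixed size;
    only the numeral passed in as n grows with the target."""
    getcontext().prec = n + 10  # extra guard digits for accuracy
    return str(Decimal(1).exp().sqrt()).replace(".", "")[:n]
```

So the description length is c + O(log n) no matter how large n is, which is the saving the textbook refers to.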

Range of values for Kolmogorov complexity

Let $$n$$ be a positive integer. Is it true that for all $$1 \leq i \leq n$$ there exists a length-$$n$$ binary string $$w$$ such that $$K(w) = i$$, where $$K(w)$$ is the Kolmogorov complexity of $$w$$?

For each $$i$$ there are clearly $$2^i$$ programs of that length. But can we be sure that at least one of them, when executed, generates a length-$$n$$ binary string?
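The counting side of this can be made explicit (a sketch only; it bounds how many strings can have small complexity, but does not by itself settle whether every value $$i$$ is attained):

```python
def short_programs(i: int) -> int:
    """Number of binary programs of length strictly below i."""
    return sum(2**j for j in range(i))  # geometric sum: 2**i - 1

# Among the 2**n strings of length n, at most 2**n - 1 can satisfy K(w) < n,
# so at least one length-n string has K(w) >= n. But a program of length i
# need not halt, and need not output a length-n string, so the 2**i count
# alone does not guarantee some n-bit w with K(w) exactly i.
```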

Kolmogorov randomness for Pseudo random number generator

I am working on pseudorandom number generation for one of my projects. Since Kolmogorov complexity is uncomputable, my goal is to show that the output is close to Kolmogorov random. I would appreciate any help or guidance on this subject.
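Since $$K$$ itself is uncomputable, one common practical stand-in is a compression-based upper bound. This is an approximation I am suggesting, not a proof of near-Kolmogorov-randomness: a compressor can only rule randomness out, never certify it. A Python sketch with zlib:

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size over original size. A ratio near 1.0 is consistent
    with high Kolmogorov complexity; any compressor only upper-bounds K."""
    return len(zlib.compress(data, 9)) / len(data)

random_ish = os.urandom(1 << 16)  # OS entropy, as a baseline
patterned = b"01" * (1 << 15)     # highly regular 64 KiB string
```

A PRNG output that compresses noticeably is demonstrably far from Kolmogorov random; one that does not compress has merely passed this one test.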

Are Kolmogorov equations degenerate elliptic?

Let $$\mu : \mathbb{R}^d \rightarrow \mathbb{R}^d$$ and $$\sigma: \mathbb{R}^{d} \rightarrow \mathbb{R}^{d \times d}$$ be smooth and Lipschitz continuous. Furthermore, let $$\varphi : \mathbb{R}^d \rightarrow \mathbb{R}$$ be continuous and at most polynomially growing.

The equation $$\begin{gather} 0 = \frac{1}{2} \operatorname{Trace}\!\left[\sigma(x) \sigma(x)^{T} (\operatorname{Hessian}_x u)(\theta)\right] + \langle \mu(x), (\nabla_x u)(\theta) \rangle - \frac{\partial u}{\partial t}(\theta) =: F\big(\theta, u(\theta), \nabla u(\theta), \operatorname{Hessian}(u)(\theta)\big), \\ u(0,x) = \varphi(x), \end{gather}$$ with $$F : \mathbb{R}^{d+1} \times \mathbb{R} \times \mathbb{R}^{d+1} \times \mathbb{R}^{(d+1) \times (d+1)} \rightarrow \mathbb{R}$$ and $$\theta = (t, x_1, \dots, x_d)$$, is called a Kolmogorov partial differential equation.

A function $$H : \mathbb{R}^{d+1} \times \mathbb{R} \times \mathbb{R}^{d+1} \times \mathbb{R}^{(d+1) \times (d+1)} \rightarrow \mathbb{R}$$ is called degenerate elliptic if, for any two symmetric matrices $$A, B$$ for which $$B - A$$ is positive definite, we have $$H(x, s, v, A) \geq H(x, s, v, B)$$ for all $$x, v \in \mathbb{R}^{d+1}$$ and $$s \in \mathbb{R}$$.

Is $$F$$ degenerate elliptic? How could I prove this?
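A sketch of the computation I would try (writing $$\sigma\sigma^{T}$$ for the diffusion matrix, as is standard): $$F$$ depends on its matrix argument only through the trace term, so for symmetric $$A, B$$,

$$F(\theta, s, v, B) - F(\theta, s, v, A) = \tfrac{1}{2} \operatorname{Trace}\!\left[\sigma(x)\sigma(x)^{T} (B - A)\right] \geq 0 \quad \text{whenever } B - A \text{ is positive semidefinite},$$

since $$\sigma\sigma^{T}$$ is positive semidefinite and $$\operatorname{Trace}[PQ] \geq 0$$ for positive semidefinite $$P, Q$$. So $$F$$ is nondecreasing in the Hessian argument; under the definition exactly as quoted above (which requires $$H(\dots, A) \geq H(\dots, B)$$), it is $$-F$$, i.e. the equation multiplied by $$-1$$, that satisfies the monotonicity, so the sign convention of the source should be checked.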

Kolmogorov superposition on the Hilbert Cube

A result of Kolmogorov and Arnold says that continuous functions on $$[0,1]^n$$ can be represented as sums of the form

$$f(x_1,\dots,x_n)=\sum_{q=0}^{2n}\Phi_q\left(\sum_{p=1}^n\phi_{p,q}(x_p)\right),$$

where $$\Phi_q$$ and $$\phi_{p,q}$$ are unary continuous functions.

I’m curious about analogous results for functions on the Hilbert cube.

Suppose I have a continuous function $$f:[0,1]^\omega \rightarrow [0,1]$$. Is it always possible to write this in the form

$$f(x_0,x_1,\dots) = \sum_{q<\omega} \Phi_q\left( \sum_{p<\omega}\phi_{p,q}(x_p) \right) ,$$

where $$\Phi_q$$ and $$\phi_{p,q}$$ are continuous functions and all of the sums converge uniformly? If this fails, are there known analogous results?