Freelancing Motivation & Technology based Blog | Reasonable Price

Why are you selling this site?
Honestly speaking, I designed this site myself to earn some extra income via Google AdSense. I posted paid and self-written articles on this site and applied for AdSense, but the site was rejected. I am no longer interested in working on it.

How is it monetized?
It's not monetized yet, but you could monetize it via Google AdSense with the help of a paid expert service. I tried but failed; this is one reason I am selling the site.

Does this site come with…


Fundamental motivation behind the use of bits and binary representation

This is a naive question, but what makes binary representation special from a theoretical standpoint and from the standpoint of information theory?

If, for technical reasons, building ternary computers (where the information is encoded as trits) were easier than building traditional binary computers, I get the feeling that most of theoretical computer science and information theory would still use bits and base-2 representation by default.

Even though I have an intuitive feeling for why, I would like to see a formal explanation: from a purely theoretical standpoint, what makes bits and binary representation special compared to any other base?

If the answer is more complex than one may first think, links to books and scientific papers are welcome.
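The premise here, that the amount of information in a number does not depend on the base used to write it, can be illustrated with a short sketch (`to_base` is a helper name introduced for this example):

```python
def to_base(n, b):
    """Digit expansion of n in base b, most significant digit first."""
    digits = []
    while n:
        digits.append(n % b)
        n //= b
    return digits[::-1] or [0]

# 100 is written with 7 binary digits but only 5 ternary digits; for large
# numbers the length ratio approaches log(3)/log(2) ~ 1.585, so one trit
# carries log2(3) bits. Changing the base rescales encoding lengths by a
# constant factor but never changes the information content.
print(to_base(100, 2))  # [1, 1, 0, 0, 1, 0, 0]
print(to_base(100, 3))  # [1, 0, 2, 0, 1]
```

This constant-factor relationship is one formal reason complexity theory is largely indifferent to the base: encoding lengths in any two bases $b, c \ge 2$ differ only by the multiplicative constant $\log c / \log b$.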

Motivation and idea of defining a non-deterministic Turing machine

This is a very basic question, but I spent some time reading and found no answer. I am not a computer science major, but I have read some basic algorithm material, for example basic sorting algorithms, and I also have some basic knowledge of how computers operate. However, I am really interested in the idea of a Turing machine, especially the non-deterministic one.

I have read the Wikipedia definition of a Turing machine (and watched some YouTube videos), and I sort of accept it, although I really feel that this is a huge jump from an algorithm to an abstract machine. From my understanding (you are more than welcome to correct me):

  1. A Turing machine is a machine performing work specified in a cookery book (the algorithm).
  2. The pages of the cookery book represent the “states” of your machine, and each page contains a table saying which state your machine will move to and which cell the head will move to, given the symbol the machine reads and your current state. (NB: this is not a total function but a partial function, because it is possible that the machine halts.)
  3. So, to guess the idea and motivation behind defining an abstract Turing machine, I imagine that the algorithm corresponds to the partial map, the memory of the computer corresponds to the (infinitely long) tape, and what is finally left on the tape is the answer to the question you want to solve.

So, a Turing machine looks like a machine that realizes any algorithm to solve problems. One just “translates” the algorithm into a set of mysteriously simple rules (i.e., the partial function), lets the machine do the laboring job, and then we get the solution.

In this respect, a Turing machine is always deterministic, because algorithms are deterministic. It tells you precisely what to do next. There is no uncertainty. A Turing machine is just a machine that realizes an algorithm.


OK, this is very abstract, and I sort of accept it. However, I then read about something called a non-deterministic Turing machine (NTM), and it knocked me down. An NTM is pretty much like a Turing machine, except that the partial function is now replaced by a “relation”. That is, it is a one-to-many map and no longer a (partial) function.

Could someone explain to me why we need such multiple options? I would never expect to encounter uncertainty in the implementation of an algorithm. It is like telling the machine: first do A; then, if you find yourself in state B and the data is now B’, choose for yourself one of the 10 allowed next steps?

Do NTMs correspond to a class of algorithms that need uncertainty, for example the generation of random numbers? If not, why do we need to allow multiple choices for a Turing machine?
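The “choose one of the allowed next steps” picture can be made concrete. The standard acceptance rule for an NTM is that the input is accepted if some sequence of choices reaches the accept state, and a deterministic simulator can check this by exploring all branches. A minimal sketch (the state names and the “contains the substring 11” language are invented for this illustration):

```python
from collections import deque

def ntm_accepts(tape, delta, start, accept, max_steps=1000):
    """Breadth-first search over configurations (state, tape, head).

    The NTM accepts iff SOME sequence of allowed choices reaches `accept`.
    `delta` maps (state, symbol) to a SET of (new_state, write, move)
    options: this set-valued map is exactly the "relation" replacing the
    deterministic partial function.
    """
    frontier = deque([(start, tuple(tape), 0)])
    seen = set(frontier)
    steps = 0
    while frontier and steps < max_steps:
        state, tp, head = frontier.popleft()
        if state == accept:
            return True
        sym = tp[head] if 0 <= head < len(tp) else '_'
        for nstate, write, move in delta.get((state, sym), ()):
            ntp = list(tp)
            if 0 <= head < len(ntp):
                ntp[head] = write
            nhead = head + (1 if move == 'R' else -1)
            cfg = (nstate, tuple(ntp), nhead)
            if cfg not in seen:
                seen.add(cfg)
                frontier.append(cfg)
        steps += 1
    return False

# NTM accepting binary strings that contain "11": on reading a '1' in q0
# the machine may either keep scanning, or GUESS that this '1' starts the
# pair and move to q1. A wrong guess simply kills that branch.
delta = {
    ('q0', '0'): {('q0', '0', 'R')},
    ('q0', '1'): {('q0', '1', 'R'), ('q1', '1', 'R')},
    ('q1', '1'): {('qacc', '1', 'R')},
}
print(ntm_accepts("0110", delta, "q0", "qacc"))  # True
print(ntm_accepts("1010", delta, "q0", "qacc"))  # False
```

The breadth-first search visits every branch, which is also why a deterministic machine can always simulate an NTM, at the cost of a possibly exponential blow-up in running time.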

Any help will be appreciated!

Best VPN for Putlocker – Everything You Need to Know

All over the world, people use Putlocker to stream online videos. So let’s delve a little deeper into this service and what it offers. We will also look at the best VPNs for Putlocker today to protect your privacy online.


What Is Putlocker?

It’s a free and easy-to-use site that has made its name as one of the most reliable places to find high-quality TV shows and movies for…


Motivation for Suslin’s Rigidity Conjecture

Suslin’s rigidity conjecture states that the motivic cohomology $$H_{\mathcal{M}}^1(\operatorname{Spec}(F),\mathbb{Q}(n))$$ of a field $F$ coincides with the motivic cohomology of the subfield of constants $F_0$.

The fact that the first motivic cohomology does not change under purely transcendental extensions gives some evidence for this conjecture.

Question: Does there exist a more conceptual reason for the validity of this conjecture? Does it tell us something new about algebraic cycles (under the assumption that the Standard Conjectures hold)?

What’s the motivation behind the binary Goppa code?

The binary Goppa code is used as an error-correcting code that can fix a large number of errors, and it has a fast decoding algorithm.

But I can’t find enough information on the internet to understand the idea behind it, how the generator and parity-check matrices arise, and why they are defined that way. And it seems Goppa’s original paper sits behind a paywall.

I have learned about linear codes, and I have found that it is not trivial to construct a good one.

Can someone explain the general idea and motivation behind the binary Goppa code?

I have a general understanding of linear codes, linear algebra, and some number theory over finite fields, but not very much of the more abstract concepts (yet).
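The generator/parity-check machinery the question asks about is easiest to see in the smallest classical linear code. The following is a sketch of the Hamming(7,4) code, not of a Goppa code (the Goppa construction builds its parity-check matrix from evaluations of a polynomial over an extension field, but it plugs into the same framework):

```python
# Hamming(7,4): 4 message bits -> 7-bit codewords, corrects 1 bit error.
# Generator matrix G (systematic form) and parity-check matrix H satisfy
# G * H^T = 0 over GF(2), so every codeword has zero syndrome.
G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]
H = [[0, 1, 1, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]

def mat_vec(M, v):
    """Matrix-vector product over GF(2)."""
    return [sum(row[i] * v[i] for i in range(len(v))) % 2 for row in M]

def encode(msg):
    """Encode 4 message bits as the codeword c = msg * G."""
    return [sum(msg[j] * G[j][i] for j in range(4)) % 2 for i in range(7)]

def correct(word):
    """Fix up to one flipped bit: the syndrome H * e^T of a single-bit
    error e equals the column of H at the error position."""
    s = mat_vec(H, word)
    if any(s):
        j = next(i for i in range(7) if [H[r][i] for r in range(3)] == s)
        word = word[:]
        word[j] ^= 1
    return word

c = encode([1, 0, 1, 1])
noisy = c[:]
noisy[4] ^= 1              # flip one bit in transit
print(correct(noisy) == c)  # True
```

A binary Goppa code is defined by the same kind of parity-check condition: the codewords are exactly the vectors whose syndrome vanishes, with $H$ built from a Goppa polynomial $g$ and a list of field elements rather than chosen by hand as above.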

Motivation Quotient of Algebraic Variety

Let $X$ be a variety with an action of an algebraic group $G$.

My question refers to a motivating example from

https://web.maths.unsw.edu.au/~danielch/thesis/mbrassil.pdf

Here the relevant excerpt:

[image of the excerpt]

Here the author discusses an example of $X/G$ in order to explain why it is necessary to form $X/G$ as a categorical quotient and not the topological one.

We consider the following motivating example, introduced on page 27:

Here we take $X := \mathbb{C}^2$ with the action of $G := \mathbb{C}^\times$ by scalar multiplication, $\lambda \cdot (x,y) \mapsto (\lambda x, \lambda y)$.

Obviously the “naive” topological quotient consists, set-theoretically, of the lines $\{(\lambda x, \lambda y) \mid \lambda \in \mathbb{C}^\times \}$ and the origin $\{(0,0)\}$.

Topologically the origin lies in the closure of every line.
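That closure claim can be spelled out (a small sketch; the orbit notation $L_{(x,y)}$ is introduced here just for illustration):

```latex
% Orbit of a nonzero point under the scaling action:
L_{(x,y)} := \{(\lambda x, \lambda y) \mid \lambda \in \mathbb{C}^{\times}\},
\qquad (x,y) \neq (0,0).
% The origin is a limit of points of the orbit, hence lies in its closure:
\lim_{\lambda \to 0} (\lambda x, \lambda y) = (0,0)
\;\Longrightarrow\; (0,0) \in \overline{L_{(x,y)}}.
```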

So the QUESTION is: why does this argument already imply that $Y := X/G$ cannot have the structure of a variety? I don’t understand the argument given by the author.

If we denote by $p:\mathbb{C}^2 \to Y$ the canonical projection map, and (by continuity?) this map cannot separate orbits, why does this already imply that $Y$ does not have the structure of a variety, as stated in the excerpt?

In particular, what role does the fact that we cannot separate the lines from the origin (in a purely topological way) play here? Does it cause an obstruction to putting a variety structure on $X/G$?

Remark: I know there are different ways to deduce that if we define $X/G$ purely topologically, then it cannot have the structure of a variety. The most common argument is to introduce the invariant ring $R^G$ and calculate it explicitly here. But the main point of this question is that I am curious about the given argumentation, which seems a bit more “elementary” in the sense that the author does not explicitly work with the concept of the invariant ring $R^G$ in this example.

Motivation behind the definition of order-$k$ (edge) expansion?

I’m trying to understand the motivation behind the idea of order-$ k$ (edge) expansion for partitions of a graph, defined below:

For simplicity, let’s focus on $ d$ -regular graphs. The definitions I’m working with are:

The edge expansion of a subset of vertices $S$ is $$\phi(S) = \frac{E(S, V \setminus S)}{d \cdot |S|},$$ where $E(A,B)$ counts the number of edges with one endpoint in $A$ and the other in $B$.

Let $S_1, \ldots, S_k$ be a collection of disjoint subsets of vertices; then their order-$k$ expansion is $$\phi_k(S_1, \ldots, S_k) = \max_{i=1,\ldots, k} \phi(S_i).$$ The order-$k$ expansion of a graph $G$ is $$\phi_k(G) = \min_{S_1, \ldots, S_k \text{ disjoint}} \phi_k(S_1, \ldots, S_k).$$

My question is: why do we take the $\max$ in the definition of $\phi_k(S_1, \ldots, S_k)$? If $S_j$ is the subset of vertices for which $\phi(S_j)$ is maximal, this means there are “a lot” of edges from $S_j$ to $V \setminus S_j$, relative to $d|S_j|$. Isn’t the $\min$ more interesting here? Doesn’t the $\min$ correspond to a set $S_i$ that can easily be removed from the graph (few edges need to be cut), while the subgraph being removed is relatively dense?
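To make the definitions concrete, here is a minimal sketch on a toy graph, the complete graph $K_4$, which is 3-regular (all names below are chosen for this example):

```python
from itertools import combinations

# K4: 4 vertices, every pair joined by an edge; 3-regular, so d = 3.
V = range(4)
edges = [(a, b) for a in V for b in V if a < b]
d = 3

def cut(S):
    """E(S, V \\ S): number of edges with exactly one endpoint in S."""
    return sum((a in S) != (b in S) for a, b in edges)

def phi(S):
    """Edge expansion of a vertex subset S."""
    return cut(S) / (d * len(S))

def phi_k(sets):
    """Order-k expansion of a disjoint collection: the WORST expander."""
    return max(phi(S) for S in sets)

# phi_2(K4): minimize over all pairs of disjoint non-empty subsets.
best = min(
    phi_k((set(S1), set(S2)))
    for r1 in range(1, 4) for S1 in combinations(V, r1)
    for r2 in range(1, 4) for S2 in combinations(V, r2)
    if not set(S1) & set(S2)
)
print(best)  # 2/3, attained e.g. by S1 = {0, 1}, S2 = {2, 3}
```

The outer $\min$ then asks for the best such collection, so $\phi_k(G)$ is small exactly when $k$ disjoint sets exist that are all simultaneously hard to expand, which is why the inner $\max$ (rather than $\min$) is the quantity being controlled.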