Is there such a notion as “effectively computable reductions”, or would this not be useful?

Most reductions for NP-hardness proofs I have encountered are effective in the sense that they give an explicit polynomial-time algorithm that transforms an instance of the known hard problem into an instance of the problem in question. All reductions among the 21 classic problems considered by R. Karp work this way. A very simple example is the reduction from INDEPENDENT_SET to CLIQUE: just build the complement graph of your input.
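For illustration, a minimal sketch of that reduction in Haskell (the graph representation and names are my own assumptions, just to make it concrete):

    type Vertex = Int

    -- A graph as a vertex list plus an undirected edge list
    -- (an illustrative representation, nothing canonical).
    data Graph = Graph { vertices :: [Vertex], edges :: [(Vertex, Vertex)] }

    -- INDEPENDENT_SET to CLIQUE: an instance (g, k) maps to
    -- (complement g, k), since g has an independent set of size k
    -- iff its complement has a clique of size k.
    complement :: Graph -> Graph
    complement (Graph vs es) =
      Graph vs [ (u, v) | u <- vs, v <- vs, u < v
                        , (u, v) `notElem` es, (v, u) `notElem` es ]

The transformation is clearly computable in polynomial time, which is the point: the reduction itself is an algorithm we can run.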

But in the proof of the famous Cook–Levin Theorem that SAT is NP-complete, the reduction starts from a non-deterministic TM together with some polynomial bounding its running time, which exists by the definition of NP. To me it is not clear how to get this polynomial effectively: given a non-deterministic TM that I know runs in polynomial time, it is not clear how to compute such a bounding polynomial, and I highly suspect it is not computable at all.

So, solely from the encoding of NP problems by the class of non-deterministic polynomial-time TMs (the polynomial itself is not part of the encoding, as far as I know), I see no way to give the reduction effectively; the aforementioned proof just shows that some reduction exists, but not how to get it.

Maybe I have misunderstood something, but I have the impression that the reductions usually given are stronger in the sense that they are indeed computable, i.e. given a problem we can compute the reduction, and not merely know of its existence.

Has this ever been noted? And if so, is there such a notion as an “effectively computable reduction”, or would it be impractical to require reductions to be themselves computable? From a more practical perspective, and also given the way I sometimes see reductions introduced (“we have an algorithm to convert one instance into another”), it would be highly desirable to know how to obtain this algorithm/reduction, so it actually seems more natural to demand it. Why is this not done?

A notion dual to a product type having a given type

Consider this class:

    class Has record part where
      extract :: record -> part
      update :: (part -> part) -> record -> record

It captures the notion of a product type record having a field of type part: the field can be extracted from the record, and functions on the field can be used to update the whole record (in a lens-ish manner).
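For concreteness, a minimal sketch of an instance, assuming MultiParamTypeClasses (the Person type is my own illustrative example):

    data Person = Person { name :: String, age :: Int }

    -- Person "has" an Int field, namely age.
    instance Has Person Int where
      extract = age
      update f p = p { age = f (age p) }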

What happens if we turn the arrows? Following the types and noting that a sum type is dual to a product type, and a “factor” in a product type is analogous to an option in a sum type, we get

    class CoHas sum option where
      coextract :: option -> sum
      coupdate :: (sum -> sum) -> option -> option

Firstly, is this line of reasoning correct at all?

If it is, what is the meaning of coextract and coupdate? Obviously, coextract produces the sum out of one of its options, so it might as well be called inject or something similar.

coupdate is more interesting. My intuition is that, given a function f that updates a sum type, it gives us a function that can be used to update one of its options. But, obviously, not every f is fit for this! Consider

    badF :: Either Int Char -> Either Int Char
    badF (Left n)  = Left n
    badF (Right _) = Left 0

then coupdate badF does not make sense where coupdate is taken from CoHas (Either Int Char) Char. One requirement seems to be that the function passed to coupdate must not change the tags of the sum type.
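For concreteness, here is one plausible instance for that case (a sketch only; it assumes FlexibleInstances, and the Left fallback is an arbitrary choice of mine that papers over exactly the problematic tag-changing case):

    instance CoHas (Either Int Char) Char where
      coextract = Right
      -- With a tag-preserving f we get the updated Char back;
      -- with a tag-changing f such as badF we arbitrarily keep
      -- the old value.
      coupdate f c = case f (Right c) of
                       Right c' -> c'
                       Left _   -> c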

So here’s the second question: what’s the dual of this requirement in the Has/update case?

My intuition is that it’s not as straightforward because Has produces a function and CoHas consumes a function. Things get more symmetric if we consider the rules for the type classes, something along the lines of

  1. update f . update g = update (f . g)
  2. update id = id
  3. extract . update f = f . extract

Now we can actually talk about bad instances of Has producing update functions that break these rules. But even with this additional constraint, I’m not sure what the laws for the functions that coupdate accepts should be, or how one could derive them from such duality-based reasoning.
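For what it’s worth, mechanically dualizing the three laws above (swapping extract/update for coextract/coupdate; this is only my guess at the dualization, not a derivation) would give something like

  1. coupdate f . coupdate g = coupdate (f . g)
  2. coupdate id = id
  3. coextract . coupdate f = f . coextract

where law 3 presumably only makes sense for the tag-preserving f discussed above.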

Is my notion of Topology correctly encoded in Agda?

Here, I’m trying to encode the notion of Topology. I was wondering if it’s correctly done via a “Propositions as Types” interpretation.

    module Topology where

    open import Data.Product public
      using (Σ; Σ-syntax; _×_; _,_; proj₁; proj₂; map₁; map₂)
    open import Data.Sum

    -- Goal: encode the notion of Topology:
    --
    -- Let X be a non-empty set. A set τ of subsets of X
    -- is said to be a topology on X if:
    --
    -- 1. X and the empty set, Ø, belong to τ
    -- 2. The union of any (finite or infinite) number of sets
    --    in τ belongs to τ
    -- 3. The intersection of any two sets in τ belongs to τ
    --
    -- The pair (X,τ) is called a topological space.

    -- We can express the notion of a subset `{ x : A | P(x) }`
    -- as `Σ[ x ∈ A ] (P x)` (with the notion that P is a mere
    -- proposition in mind).
    subset : (X : Set) → (P : (X → Set)) → Set
    subset X P =
      Σ[ a ∈ X ] (P a)

    -- If a subset is described by a predicate giving an inhabited
    -- proposition for every **element** of X, a set of subsets must
    -- be described by a predicate giving an inhabited proposition
    -- for every **predicate** on X.
    setOfSubsets : (X : Set) → (ℙ : (X → Set) → Set) → Set₁
    setOfSubsets X ℙ =
      Σ[ P ∈ (X → Set) ]
      (ℙ P)

    data Ø : Set where

    data ⊤ : Set where
      ⋆ : ⊤

    -- Identity predicate
    P-id : {X : Set} → (X → Set)
    P-id = λ _ → ⊤

    -- Zero predicate
    P₀ : {X : Set} → (X → Set)
    P₀ = λ _ → Ø

    isTopology : (X : Set) → (τ : (X → Set) → Set) → Set₁
    isTopology X τ =
      Σ[ P ∈ (X → Set) ]
      Σ[ _ ∈ τ P ]
      Σ[ _ ∈ τ P-id ]
      Σ[ _ ∈ τ P₀ ]
      Σ[ _ ∈ (∀ (A B : X → Set) → (τ A) → (τ B) → (τ (λ x → A x ⊎ B x))) ]
      Σ[ _ ∈ (∀ (A B : X → Set) → (τ A) → (τ B) → (τ (λ x → A x × B x))) ]
      ⊤

What is the relation between the formal definition of strictness and its intuitive notion

I am currently reading Functional Programming in Scala and have encountered a statement in the book I cannot quite make sense of.

On page 67, we are told the formal definition of strictness:

“If the evaluation of an expression runs forever or throws an error instead of returning a definite value, we say that the expression doesn’t terminate, or that it evaluates to bottom. A function f is strict if the expression f(x) evaluates to bottom for all x that evaluate to bottom.”

This definition is somewhat puzzling because the discussion of strictness that precedes it relates strictness to lazy evaluation: to say that an expression is strict is to say that it is completely evaluated, not lazily. What is not clear is how this notion of strictness and laziness relates to the formal definition given.
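To see the two behaviours side by side, here is a small illustration (in Haskell rather than Scala, since non-strict evaluation is the default there; the function names are mine):

    -- Never inspects its argument, so even a bottom argument gives
    -- a result: lazyConst undefined evaluates to 42. Under the
    -- formal definition, lazyConst is therefore not strict.
    lazyConst :: a -> Int
    lazyConst _ = 42

    -- Pattern matching forces the argument, so a bottom argument
    -- yields bottom: notB undefined diverges (or throws). Under the
    -- formal definition, notB is strict.
    notB :: Bool -> Bool
    notB True  = False
    notB False = True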

Does a relation in a DBMS correspond to the programming-language notion of a variable?

Thanks for taking the time to read my question. I am fairly new to DBMS and am following the book cited below to clear up my concepts. As far as I know, attributes in a DBMS correspond to instance variables in programming languages (I believe I read this in Ullman). I am a bit confused by the analogy in the first sentence of the paragraph below, i.e. comparing a relation (table) with a variable. If I am not wrong, a relation means a table in DBMS.

Excerpt from Database System Concepts, 6th Edition, by Abraham Silberschatz, Henry F. Korth, and S. Sudarshan:

The concept of a relation corresponds to the programming-language notion of a variable, while the concept of a relation schema corresponds to the programming-language notion of type definition.
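If it helps to see the analogy in code, here is a rough sketch (Haskell, with types and values that are my own illustrative assumptions, not from the book):

    -- A relation schema is like a type definition: it fixes the
    -- attributes and their domains once and for all.
    data Instructor = Instructor
      { instId   :: Int
      , instName :: String
      , salary   :: Double
      }

    -- A relation is like a variable of that type: its current value
    -- (the set of tuples) changes over time, but its schema does not.
    instructors :: [Instructor]
    instructors = [Instructor 10101 "Srinivasan" 65000]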

Is there a notion of “localization morphism” of schemes?

Let $A$ be a ring and $S\subseteq A$ a multiplicative system. Then the localization homomorphism of rings $\phi:A\to S^{-1}A$ induces a morphism between the spectra: $f=\mathrm{Spec}(\phi):\mathrm{Spec}(S^{-1}A)\to\mathrm{Spec}(A)$.

Is there a property of scheme morphisms $f:X\to Y$ that captures the idea that $f$ locally looks like the Spec of a localization homomorphism of rings?

Notice that $S$ doesn’t have to be of the form $\{f^n\}_{n\geq 0}$ for $f\in A$ or $A\smallsetminus \mathfrak{p}$ for a prime ideal $\mathfrak{p}\subset A$.
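For orientation, in those two special cases the induced morphisms are the familiar ones (standard examples, recalled here only for context):

$$\mathrm{Spec}(A_f)\hookrightarrow \mathrm{Spec}(A)\quad\text{(the principal open immersion onto } D(f)\text{)},$$

$$\mathrm{Spec}(A_\mathfrak{p})\to \mathrm{Spec}(A)\quad\text{(the canonical morphism from the spectrum of the local ring at } \mathfrak{p}\text{)}.$$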


Here’s an attempt. $f:X\to Y$ is a localization morphism if there is an affine open cover $\{V_i\}$ of $Y$ such that every $f^{-1}(V_i)$ has an affine open cover $\{U_{ij}\}$ such that each restriction $f:U_{ij}\to V_i$ corresponds to a ring homomorphism $\phi=f^{\sharp}:A\to B$, where $A=\mathcal{O}_Y(V_i)$ and $B=\mathcal{O}_X(U_{ij})$, and there is a multiplicative system $S\subseteq A$ and an isomorphism $\alpha:B\overset{\sim}{\to} S^{-1}A$ such that $\alpha\circ\phi=\lambda$, where $\lambda:A\to S^{-1}A$ is the canonical map to the localization.

Confused about the notion of object definition in Python

I have read somewhere that “every Python object is a value having a certain type stored at a particular memory location”. The other definition of a Python object is that each object has a type, an identity, and a value. According to this second definition, we would think of an object as a thing holding a value: a rectangular box containing the integer value 36, for instance, where the entire thing, not just the value 36, is called the object. But this is not true if we say that 36 is itself an object and there is no such container holding 36. It does not sound right that the int object 36 has an identity (id(36)), a type (int), and a value (36), yet this is what the second definition says. So do the two definitions contradict each other? Please help.

A good notion of “minimal field of definition”

Let $X$ be a variety over a separably closed field $k$.

By the definition of variety, there exists a subfield $k_0\subset k$ and a $k_0$-variety $X_0$ such that $$X_0\times_{k_0}k\simeq X$$

There are a number of ways to define the notion of “minimal $k_0$”, but they usually involve writing equations for a finite affine cover of $X$.

For certain other definitions it’s not clear that a minimal $k_0$ exists.

Is there an intrinsic definition of the “minimal field of definition” of $X$ under which such a field is guaranteed to exist?

Is there a notion of a continuous basis of a Banach space?

If $X$ is a Banach space, then a Hamel basis of $X$ is a subset $B$ of $X$ such that every element of $X$ can be written uniquely as a finite linear combination of elements of $B$. And a Schauder basis of $X$ is a subset $B$ of $X$ such that every element of $X$ can be written uniquely as an infinite linear combination of elements of $B$.
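In symbols, my reading of the two notions above (with coefficients $a_n$ and basis elements $b_n\in B$, and norm convergence in the Schauder case):

$$x=\sum_{k=1}^{n} a_k b_k \quad\text{(Hamel: finite sums)},\qquad x=\sum_{n=1}^{\infty} a_n b_n \quad\text{(Schauder: norm-convergent series)}.$$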

But my question is, is there a notion of a “continuous basis” of a Banach space? That is, a subset $B$ of $X$ such that every element of $X$ can be written in terms of some kind of integral involving elements of $B$.

I’m not sure what the integral should look like, but one possibility is this. We define some function $f:\mathbb{R}\rightarrow X$, and we let $B$ be the range of $f$. And then for any $x\in X$, there exists a unique function $g:\mathbb{R}\rightarrow\mathbb{R}$ such that $x = \int_{-\infty}^\infty g(t)f(t)\,dt$, where this is a Bochner integral. And if that’s the case, we say that $B$ is a continuous basis for $X$. Does any of this make sense?