How to display an infinite ratio to best communicate that the set contains exclusively one type?

I have a reporting dashboard displaying a ratio of apples to oranges normalized to 1:x, so that, for example, 10 apples and 20 oranges is displayed as “1:2”.

How would I best display 0 apples 20 oranges?

  • “1:Infinity” is an option but looks weird
  • “0:20” or “0:1” shouldn’t be options, since the requirement is to normalize to the form “1:x”; the real intent is to communicate to the user that this cell of the table contains “only oranges”
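If the display logic lives in code, one option is simply to special-case the degenerate counts before normalizing. A minimal sketch in Haskell (the function name and the exact wording of the fallback strings are placeholders of my own, not part of the question):

```haskell
import Data.Ratio ((%), denominator, numerator)

-- Format an apples:oranges pair normalized to "1:x", spelling out
-- the degenerate cases instead of printing "1:Infinity" or "0:1".
formatRatio :: Integer -> Integer -> String
formatRatio 0 0 = "empty"
formatRatio 0 _ = "only oranges"
formatRatio _ 0 = "only apples"
formatRatio apples oranges
  | denominator r == 1 = "1:" ++ show (numerator r)           -- e.g. "1:2"
  | otherwise          = "1:" ++ show (fromRational r :: Double)
  where
    r = oranges % apples  -- exact ratio, reduced to lowest terms
```

With this, 10 apples and 20 oranges still renders as “1:2”, while 0 apples and 20 oranges renders as “only oranges”.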

get all properties of custom post type

Specifically, I’m trying to get the ‘Layout Group’ of a given ThemeREX Custom Layout (which, as I understand it, is just a Custom Post Type), but I’m also interested in seeing what other properties are available for that post.

I tried

print_r(get_post_meta(get_post(1738, 'ARRAY_A', 'display'),"",true)); 

but all that was returned was the number 1.

I’m guessing the meta is not what I’m looking for. Is there a way to iterate through all the custom properties that are registered with that post’s CPT?

Is beta reduction in type theory, considered as the counit of the hom-tensor adjunction in category theory, a denotational or an operational semantics?

In the nLab article about the relation between type theory and category theory, it is said that “beta reduction” in type theory corresponds to the “counit for hom-tensor adjunction” in category theory, and likewise that “substitution” corresponds to “composition of classifying morphisms / pullback of display maps”.
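For reference, here is the correspondence being asked about, written out in the usual notation (my summary, not a quote from the nLab page): beta reduction rewrites an application of a lambda into a substitution, and its categorical counterpart is the evaluation map, i.e. the counit $\varepsilon$ of the adjunction $(-) \otimes A \dashv [A,-]$:

```latex
\[
  (\lambda x{:}A.\; t)\, u \;\rightsquigarrow\; t[u/x]
  \qquad\longleftrightarrow\qquad
  \varepsilon_B : [A, B] \otimes A \longrightarrow B
\]
```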

Are these considered denotational or operational semantics?

A notion dual to a product type having a given type

Consider this class:

class Has record part where
  extract :: record -> part
  update :: (part -> part) -> record -> record

It captures the notion of some product type record having a field of the type part which can be extracted from the record, or the functions on which can be used to update the whole record (in a lens-ish manner).
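To make the class concrete, here is one possible instance (a throwaway Person record of my own invention; the class declaration is repeated so the snippet stands alone):

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}

class Has record part where
  extract :: record -> part
  update  :: (part -> part) -> record -> record

-- A hypothetical record with an Int field.
data Person = Person { name :: String, age :: Int }

-- Person "has" an Int: its age.
instance Has Person Int where
  extract    = age
  update f p = p { age = f (age p) }
```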

What happens if we turn the arrows? Following the types and noting that a sum type is dual to a product type, and a “factor” in a product type is analogous to an option in a sum type, we get

class CoHas sum option where
  coextract :: option -> sum
  coupdate :: (sum -> sum) -> option -> option

Firstly, is this line of reasoning correct at all?

If it is, what is the meaning of coextract and coupdate? Obviously, coextract produces the sum out of one of its options, so it might as well be called inject or something similar.

coupdate is more interesting. My intuition is that, given a function f that updates a sum type, it gives us a function that can be used to update one of its options. But, obviously, not every f is fit for this! Consider

badF :: Either Int Char -> Either Int Char
badF (Left n)  = Left n
badF (Right _) = Left 0

then coupdate badF does not make sense where coupdate is taken from CoHas (Either Int Char) Char. One requirement seems to be that the function passed to coupdate must not change the tags of the sum type.
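One way to see the tension is to actually write the instance (my own sketch, not from the question; note that the class offers no honest way to produce a Char when f abandons the Right tag, so the instance is forced to pick an arbitrary fallback, which is exactly the law violation in disguise):

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}

class CoHas sum option where
  coextract :: option -> sum
  coupdate  :: (sum -> sum) -> option -> option

instance CoHas (Either Int Char) Char where
  coextract = Right
  -- Well-behaved only when f preserves the Right tag; if f jumps
  -- to Left, there is no Char to return, so we fall back to the input.
  coupdate f c = case f (Right c) of
    Right c' -> c'
    Left _   -> c
```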

So here’s the second question: what’s the dual of this requirement in the Has/update case?

My intuition is that it’s not as straightforward because Has produces a function and CoHas consumes a function. Things get more symmetric if we consider the rules for the type classes, something along the lines of

  1. update f . update g = update (f . g)
  2. update id = id
  3. extract . update f = f . extract

Now we can actually talk about bad instances of Has producing update functions breaking these rules. But even with this additional constraint, I’m not sure I follow what the laws for the functions that coupdate accepts should be and how one could derive them from such duality-based reasoning.

Minimum number of bits required for R-type, I-type and J-type instructions

I am new to computer architecture and studying for a midterm, and I am stuck on this question. It would be much appreciated if someone could provide an easy-to-understand solution and briefly explain the approach.

For a computer using a MIPS-like instruction set, 64 instructions are reserved for R-type instructions and 63 more instructions are reserved for I-type and J-type instructions in total.

Also, there are 128 registers in the system and the size of one register is 64 bits.

(a) According to the given configuration, what would be the minimum number of bits required for an instruction? Note that:

  • In MIPS, the opcode of all R-type instructions will be 0 (zero). The arithmetic operation is selected according to the function code (funct).
  • MIPS is a RISC instruction set, therefore its instruction length is fixed
  • (Editor’s note: presumably we’re meant to assume 3-operand op dst, src1, src2 instructions so we need 3 register fields in R-type instructions.)

(b) What would be the maximum size of the memory for this computer? Note that:

  • In MIPS ISA, there is only register addressing mode for memory accesses, which means the address of the memory location is stored in a register.
    (editor’s note: actually register + 16-bit sign-extended immediate offset, like
    lw $t0, 1234($t1). But that’s not really relevant; address calculation happens with wrapping at the register width.)
  • In the given ISA, memory is byte addressable.
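Not an answer sheet, but the arithmetic the question is driving at can be sanity-checked mechanically. A sketch under my reading of the constraints (64 R-type instructions share opcode 0 and are distinguished by funct, the other 63 opcodes are distinct, there are 128 registers, and an R-type instruction needs 3 register fields):

```haskell
-- Smallest number of bits that can distinguish n values.
bitsFor :: Integer -> Integer
bitsFor n = head [b | b <- [0 ..], 2 ^ b >= n]

opcodeBits, functBits, regBits, rTypeBits :: Integer
opcodeBits = bitsFor (1 + 63)  -- one shared R-type opcode + 63 others = 64 opcodes
functBits  = bitsFor 64        -- funct must distinguish the 64 R-type instructions
regBits    = bitsFor 128       -- 128 registers

-- (a) R-type is the widest format here: opcode + 3 register fields + funct.
rTypeBits :: Integer
rTypeBits = opcodeBits + 3 * regBits + functBits

-- (b) Addresses are computed in 64-bit registers and memory is byte
-- addressable, so at most 2^64 bytes are addressable.
maxMemoryBytes :: Integer
maxMemoryBytes = 2 ^ (64 :: Integer)
```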

Type Ahead not working

We are using our custom display template with a results script web part.

We are unable to get type-ahead working.

I have tried the following:

  1. Set client type to: ContentSearchRegular
  2. Set client type to: SiteQuery-All
  3. Set Result type in the display template
  4. Look for piPageImpression in the response object of the search result object

Any clues would be highly appreciated

Thanks

Are type variables really only used in mathematical conversation about types?

Are type variables really only used in mathematical conversation about types? I.e., do type variables (meta-variables that stand for a type) exist only in proofs about types, but not in real programming languages? At least, is this true for monomorphic types? I.e., I can’t actually define:

$$\texttt{let id x = x} : \tau$$

unless the type $\tau$ has already been defined or is built in? I.e., type variables exist only for symbolic manipulation of type proofs by humans (at least in the monomorphic case).


Extra thoughts:

I am trying to understand how one defines the identity function $\texttt{let id x = x}$ in monomorphic type systems. I believe the motivation for polymorphism is to allow applying the same syntactic definition of a function to data types that are NOT the same, like $\texttt{id 2}$ and $\texttt{id true}$, without the type checker freaking out at us when it tries to run those (if the language is dynamically typed… I guess in a static language the compiler would complain at the first instance of the identity function being applied to the wrong type, when we try to compile it to usable code).
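For what it’s worth, this is exactly the distinction a statically typed language makes explicit. A small Haskell illustration (the names are mine, just for the example):

```haskell
-- Polymorphic identity: the signature is implicitly
-- "forall a. a -> a", so one definition works at every type.
idPoly :: a -> a
idPoly x = x

-- Monomorphic identity: the type variable has been fixed to Int,
-- so an application like (idMono True) would be a type error.
idMono :: Int -> Int
idMono x = x
```

Here `idPoly 2` and `idPoly True` both type-check, while `idMono` can only ever be applied to an `Int`.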

So, to my understanding, if we define a type as the set of its possible data values, then say $\texttt{int}$ denotes the set of all integers, and $Set_{int \to int}$ is the set of all functions that map integers to integers. Now consider $Set_{\forall \alpha . \alpha \to \alpha}$ and the identity functions corresponding to both sets, $id_{\forall \alpha . \alpha \to \alpha} \in Set_{\forall \alpha . \alpha \to \alpha}$ and $id_{int} \in Set_{int \to int}$. When both identity functions are applied to integers they behave exactly the same, BUT they are not the same function, because they are not the same data value: one is the identity on integers only, while the other says “give me any type and I will act like its identity”. So in particular $Set_{\alpha \to \alpha} = Set_{int \to int} \cup \dots \cup Set_{string \to string}$ (i.e. each $id_{\alpha \to \alpha}$ is just the identity function for some specific type, not a function able to process ANY type; this is why the “for all” is important).

So my question is: is $id_{\forall \alpha . \alpha \to \alpha} \in Set_{\alpha \to \alpha}$? I guess I am trying to clarify the dots $\dots$: what do they cover? Perhaps it depends on how the types are formally defined. If $\alpha$ stands for a type variable, but type variables are only allowed to range over monomorphic types by definition (i.e., the type constructor for the meta-language is recursive, but only over the monomorphic types, so we can’t introduce the forall quantifier), then $id_{\forall \alpha . \alpha \to \alpha} \not\in Set_{\alpha \to \alpha}$. But if the recursive definition of polymorphic types is recursive at the polymorphic step, then perhaps $id_{\forall \alpha . \alpha \to \alpha} \in Set_{\alpha \to \alpha}$ is true? Actually, I don’t think so…


Context: CS 421


Related:

  • What does $\forall \alpha_1, \dots , \alpha_n . \tau$ mean formally as a type?

  • What is the difference between $\alpha \to \alpha$ vs $\forall \alpha . \alpha \to \alpha$?