I have a question about the Unit Type in the context of Type Theory and its use in different scenarios.

To start with, a Unit Type can be seen as a nullary Product Type, namely Unit, whose only value term is the empty tuple, `()`. Also, there exists a **unique** map from any type to Unit.
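In Standard ML, this unique map can be sketched as a function that simply discards its argument (the name `bang` is a hypothetical choice for illustration, not standard terminology in the example above):

```sml
(* The unique map from any type 'a to unit: ignore the argument
   and return the only value of type unit. *)
fun bang _ = ()

val u1 = bang 42       (* evaluates to () *)
val u2 = bang "hello"  (* evaluates to () *)
```

Since every result is `()`, any two functions into `unit` are extensionally equal, which is exactly what uniqueness of the map means.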

Now, the use of the Unit Type goes beyond this trivial definition: it also appears in the definition of Algebraic Data Types, which are Sums of Product Types. Specifically, it is possible to represent an Enumerated Type as a Sum of Unit Types, e.g. in Standard ML we might have:

`datatype suit = HEARTS | CLUBS | DIAMONDS | SPADES`

where HEARTS, CLUBS, DIAMONDS and SPADES are nullary constructors, each carrying no data (i.e. a nullary Product), and therefore each isomorphic to Unit.
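As a concrete illustration of how such a Sum Type is consumed, a pattern match on `suit` dispatches on the constructor tag rather than on any payload (the function name `color` is a hypothetical example, not from the declaration above):

```sml
datatype suit = HEARTS | CLUBS | DIAMONDS | SPADES

(* The match inspects which injection (tag) was used;
   the "contents" of each case are empty, i.e. unit-like. *)
fun color HEARTS   = "red"
  | color DIAMONDS = "red"
  | color CLUBS    = "black"
  | color SPADES   = "black"
```

This suggests that what distinguishes the four cases is not the value inside each summand but the injection through which it entered the sum.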

My question is the following: if there exists only one element of Unit, how can the type system distinguish between the four distinct instances used in the Sum Type above (five instances if we also count the empty tuple…)? I understand that they can all be considered equal to each other up to isomorphism, but they are extensionally different; indeed, even considering only the `suit` type, we are supposed to pattern-match on them…