When writing a lexer/parser, why and when would an informed developer choose to represent token types with an enumeration field rather than a type hierarchy (or vice versa)?
The closest question I’ve found here so far is Lexing: One token per operator, or one universal operator token? by Jeroen Bollen, but it seems to be more about the ideal depth of the token type hierarchy.
In my own experience, I’ve used
Newtonsoft.Json’s reader, which uses an enumeration, and I’ve read about C#’s
Expression types, which seem to use a hierarchy, though they appear to be more than just tokens.
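To make the distinction concrete, here is a minimal sketch of the two designs I mean (all names are hypothetical, shown in Java rather than C# for brevity):

```java
// Design 1: a single token class whose kind is an enumeration field.
enum TokenKind { IDENTIFIER, NUMBER, PLUS, LEFT_PAREN }

record EnumToken(TokenKind kind, String text) {}

// Design 2: an abstract base class with one subclass per token kind.
abstract class Token {
    final String text;
    Token(String text) { this.text = text; }
}

class IdentifierToken extends Token {
    IdentifierToken(String text) { super(text); }
}

class NumberToken extends Token {
    final double value; // subclasses can carry kind-specific payload
    NumberToken(String text, double value) {
        super(text);
        this.value = value;
    }
}
```

With the enum, dispatch happens via `switch` on the `kind` field; with the hierarchy, it happens via `instanceof` checks or virtual methods, and each subclass can carry its own extra data.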