According to Moore (1980), mathematical logicians at the turn of the last century (e.g. Peirce, Schröder, Hilbert) did not yet distinguish between syntax and semantics when formulating logical and logico-mathematical theories.
I am trying to understand how conflating syntax and semantics could (as Moore claims) encourage a move to infinitary logics.
One straightforward example concerns how to understand the quantifiers. A purely syntactic understanding would regard them as formal expressions governed by certain syntactical rules (e.g. that $\exists$ distributes over disjunction while $\forall$ does not, or the introduction and elimination rules in proofs).
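For concreteness, the distribution asymmetry I have in mind is the standard one (my formulation, not Moore's):

$$\exists x\,(\varphi(x) \lor \psi(x)) \;\equiv\; \exists x\,\varphi(x) \lor \exists x\,\psi(x), \qquad \forall x\,(\varphi(x) \lor \psi(x)) \;\not\equiv\; \forall x\,\varphi(x) \lor \forall x\,\psi(x).$$

On the purely syntactic view, rules like these exhaust what the quantifiers are; no domain is mentioned anywhere.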
However, Peirce is said to have understood the existential and universal quantifiers as identical with (possibly infinite) disjunctions and conjunctions, respectively. This makes quantification dependent on the domain in a way that the syntactic version does not seem to be.
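As I understand the Peircean reading, over a domain $D = \{a_1, a_2, a_3, \ldots\}$ the quantifiers just are the expansions (my notation, sketching the idea):

$$\exists x\,\varphi(x) \;=\; \varphi(a_1) \lor \varphi(a_2) \lor \varphi(a_3) \lor \cdots, \qquad \forall x\,\varphi(x) \;=\; \varphi(a_1) \land \varphi(a_2) \land \varphi(a_3) \land \cdots$$

so an infinite domain immediately forces infinitely long formulas.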
Another case is Löwenheim's requirement that, when introducing a first-order expression, one must also specify the domain of individuals, with the quantifier ranging over the name of each individual in the domain. Here again Moore claims that the conflation of syntax and semantics encouraged Löwenheim's move to a language allowing infinite conjunctions, infinite disjunctions, and even transfinitely many quantifiers.
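If I am reading the claim correctly, such a language would then admit strings of the following sort (my own sketch of the kind of expression Moore describes, not his notation):

$$\forall x_1\, \exists x_2\, \forall x_3 \,\cdots\; \varphi(x_1, x_2, x_3, \ldots)$$

with a possibly transfinite sequence of quantifiers in the prefix.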
In what ways does Löwenheim's method diverge from a “pure” syntactical approach? Is the main point that, by understanding the quantifier as ranging over names of individuals rather than the individuals themselves, Löwenheim commits himself to a language with infinitely many constants whenever a quantifier ranges over an infinite domain? (This would seem to be an entirely different point from the way Peirce was supposedly led to an infinitary language, as described above.)