Is password strength exclusively a function of character set size multiplied by password length-in-characters?


My team is responsible for the creation and management of hundreds of passwords, which we do almost exclusively programmatically (all generation is suitably random). We leverage a variety of tools for automating different aspects of our infrastructure, including but not limited to Ansible, Docker, Jenkins, and Terraform. These tools all have their own peculiarities as to how they consume and expose strings in various contexts, of which POSIX shells and shell-like execution environments are worth noting specifically. We frequently call these tools from within one another, passing context between them for a variety of purposes.
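For context, our generation looks roughly like the following sketch (the function name and alphabet constant are illustrative, not our actual code): uniform random selection from a fixed alphabet using a CSPRNG.

```python
import secrets
import string

# Illustrative alphabet: the 62 alphanumeric ASCII characters.
ALPHABET = string.ascii_letters + string.digits

def generate_password(length: int) -> str:
    # secrets.choice draws from a cryptographically secure RNG,
    # so each character is selected uniformly and independently.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password(18))
```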

The problem: it is frustratingly common to spend a great deal of time investigating mysterious bugs, only to find that the root cause was the corruption of (or failure to properly parse) a password at some point in the chain of context passing, often (but not always) in a POSIX shell-like context, due specifically to the presence of special characters in the password.

The question: given a password-complexity requirement expressed as a length over a given character set, is there any difference from a security perspective in achieving the same complexity by constructing a longer password from a reduced set?

To put this concretely: if we have been generating "sufficiently strong" passwords of length x from the 95 printable ASCII characters, and we reduce the character set to the 62 lowercase, uppercase, and numeric ASCII characters, will there be any loss of password security whatsoever by constructing passwords of length y, where 62^y >= 95^x?
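For what it's worth, the arithmetic behind the equivalence can be sketched like this (function names are mine): the entropy of a uniformly random password is length times log2 of the alphabet size, so the required length y is x scaled by the ratio of the logs.

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    # Entropy of a uniformly random password: length * log2(|alphabet|).
    return length * math.log2(alphabet_size)

def equivalent_length(x: int, from_size: int = 95, to_size: int = 62) -> int:
    # Smallest y such that to_size**y >= from_size**x, i.e.
    # y >= x * log(from_size) / log(to_size).
    return math.ceil(x * math.log(from_size) / math.log(to_size))

x = 16
print(f"{entropy_bits(95, x):.1f} bits")            # ~105.1 bits
y = equivalent_length(x)
print(f"length {y} over 62 characters suffices")    # y = 18
print(entropy_bits(62, y) >= entropy_bits(95, x))   # True
```

So in this example, going from 95 to 62 characters costs about two extra characters of length per 16.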