Why does using robots.txt and a meta robots tag at the same time cause issues?

It seems to be common wisdom that using X-Robots-Tag or meta robots together with robots.txt to block a URL from being indexed "can cause issues", e.g.:

Using both X-Robots-Tag and meta robots on a URL is redundant because they are equivalent, and using both robots.txt and either of the others for a URL can cause issues because robots.txt blocks crawling, and crawling is required for a bot to even see either of the other ones since they are document-level directives.

Source: https://webmasters.stackexchange.com/a/130710
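
For reference, the two "document-level directives" that answer refers to are the meta robots tag in the page's HTML and the equivalent X-Robots-Tag HTTP response header (the `noindex` value below is just one example of a directive):

```html
<!-- In the <head> of the page -->
<meta name="robots" content="noindex">
```

```
X-Robots-Tag: noindex
```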

However, I cannot see what the "issue" is. Suppose I have a URL /foo.html for which crawling is blocked via robots.txt. Then the bot will not crawl the page, but the URL might still be indexed if foo.html is linked from another site. If we also add a meta robots tag to foo.html, indexing will be blocked when the bot reaches the page via that external link, independently of what happens to be in robots.txt.
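
To make the scenario concrete, here is a minimal sketch of the setup I have in mind (the specific rules and markup are illustrative, not taken from a real site):

```
# robots.txt at the site root -- blocks crawling of /foo.html
User-agent: *
Disallow: /foo.html
```

```html
<!-- /foo.html -- document-level directive asking bots not to index the page -->
<!DOCTYPE html>
<html>
<head>
  <meta name="robots" content="noindex">
  <title>foo</title>
</head>
<body>...</body>
</html>
```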

Therefore, while using both methods might be redundant, I am not sure why doing so could cause foo.html to be indexed.