Is renaming schemas more efficient/faster than drop/create in a single transaction?

Given the constraint (from the customer's side) of having only one database, how can I most effectively deploy new versions of data into a production-facing application?

The new data is packaged in a custom-format (-Fc) dump file. It is meant to replace the existing data, so restoring it requires dropping the existing tables first and then restoring the new ones.

The naive approach of simply importing the dump file with pg_restore --single-transaction holds locks on the affected tables for the entire restore, which causes the application that depends on that data to block or suspend its queries.
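For reference, this is roughly what the naive restore looks like (a sketch; --clean/--if-exists are one assumed way of dropping the existing tables as part of the restore, and mydb / dump.fc are placeholder names):

```bash
# Naive approach: drop and recreate everything in a single transaction,
# directly in the schema the application is querying. All drops and data
# loads hold their locks until the final commit.
pg_restore --clean --if-exists --single-transaction -d mydb dump.fc
```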

My intended approach is to use schemas. The application has its connection string set to use mydb.public, and the import loads all new data into mydb.next (see the sketch below).
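A minimal sketch of that staging step, assuming the archive's objects already target a schema named next; as far as I know pg_restore has no option to remap schema names, so the dump has to be produced accordingly (or the data loaded some other way):

```bash
# Load the new data into the staging schema while the application keeps
# querying public undisturbed; dump.fc is a placeholder for the archive.
pg_restore --single-transaction -d mydb dump.fc
```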

Then, rename mydb.public to mydb.old, rename mydb.next to mydb.public, and finally drop mydb.old, as sketched below.
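The swap itself could look like this (a minimal sketch; the schema names match the ones above):

```sql
-- Swap the schemas; both renames are catalog-only updates, no table data
-- is moved or rewritten. Keep this transaction as short as possible.
BEGIN;
ALTER SCHEMA public RENAME TO old;
ALTER SCHEMA next RENAME TO public;
COMMIT;

-- Clean up afterwards, outside the critical window; CASCADE removes all
-- tables and other objects contained in the old schema.
DROP SCHEMA old CASCADE;
```

Deferring the DROP to a separate transaction should keep the swap window short, since dropping many tables can take noticeably longer than the two renames.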

Would this approach result in shorter downtime than importing directly? I am not sure how schema renames work internally in Postgres, but from my surface-level understanding, it should be the more efficient approach.