Efficiently DROP a huge table with cascading replication in PostgreSQL

What I have:

Database: PostgreSQL 9.3

Table T:

  • Structure: 10 integers/bools and 1 text field
  • Size: Table 89 GB / TOAST 1046 GB
  • Usage: about 10 inserts / minute
  • Other: reltuples 59913608 / relpages 11681783 (see the catalog query after this list)
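
For reference, these figures can be read straight from the catalogs; a minimal sketch, assuming T lives at public.t (a placeholder name):

    -- on-disk size of the heap and of its TOAST table, plus planner statistics
    SELECT pg_size_pretty(pg_relation_size(c.oid))           AS table_size,
           pg_size_pretty(pg_relation_size(c.reltoastrelid)) AS toast_size,
           c.reltuples,
           c.relpages
    FROM   pg_class c
    WHERE  c.oid = 'public.t'::regclass;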

Running cascading replication: Master -> Slave 1 -> Slave 2

  • Replication Master -> Slave 1 is fast; the network channel is good.
  • Replication Slave 1 -> Slave 2 is slow: a cross-continent link at about 10 Mbit/s (see the lag queries after this list).
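
Lag on each hop can be watched with the stock 9.3 monitoring functions; a sketch (run the first query on the Master, the second on Slave 1):

    -- on the Master: bytes of WAL that Slave 1 still has to replay
    SELECT client_addr,
           pg_xlog_location_diff(pg_current_xlog_location(),
                                 replay_location) AS lag_bytes
    FROM   pg_stat_replication;

    -- on Slave 1: the same for Slave 2; a standby cannot call
    -- pg_current_xlog_location(), so compare against the WAL it has received
    SELECT client_addr,
           pg_xlog_location_diff(pg_last_xlog_receive_location(),
                                 replay_location) AS lag_bytes
    FROM   pg_stat_replication;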

This is a live production database holding about 1.5 TB of other data.

What needs to be done:

  • Drop all the data in T and start with a fresh setup (then clean up regularly so the table never grows this big again).

Question: What would be the most efficient way to achieve this:

  • without causing a huge lag between the Master and Slave 1
  • without Slave 2 falling so far behind that it can no longer catch up

As I see it:

  • Safe way – create an empty copy, swap it into place, then DELETE the old data in batches while constantly watching the lag (see the sketch after this list)
  • Other way – create an empty copy, swap it into place, then DROP the old table – but would this push an enormous amount of data through replication at once and leave Slave 2 lost?
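
A sketch of the copy-and-swap plus batched DELETE, assuming the table is public.t, that t_new and t_old are free names, and that the batch size of 10000 is only a starting point to tune against the lag queries above (grants, foreign keys and any serial-owned sequences would need separate handling):

    BEGIN;
    -- empty clone with the same columns, defaults, constraints and indexes
    CREATE TABLE t_new (LIKE t INCLUDING ALL);
    -- swap: new inserts land in the fresh table from here on
    ALTER TABLE t     RENAME TO t_old;
    ALTER TABLE t_new RENAME TO t;
    COMMIT;

    -- "safe way": delete in small batches; re-run from a script that sleeps
    -- between batches whenever the lag queries show the slaves falling behind
    DELETE FROM t_old
    WHERE  ctid = ANY (ARRAY(SELECT ctid FROM t_old LIMIT 10000));

    -- once t_old is empty, removing it is cheap
    DROP TABLE t_old;

Each DELETE batch writes WAL roughly proportional to the rows it removes, so the batch size and the sleep between batches directly control how much replication traffic is generated per minute.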