I have a huge vacuum_l table filled with the OIDs of large objects I can safely delete. As the vacuumlo.c source code notes:
> We don't want to run each delete as an individual transaction, because the commit overhead would be high. However, since 9.0 the backend will acquire a lock per deleted LO, so deleting too many LOs per transaction risks running out of room in the shared-memory lock table. Accordingly, we delete up to transaction_limit LOs per transaction.
What is the best way to write a script that iterates through the whole table while limiting the calls to lo_unlink() to n per transaction?
So far I have tried to rewrite the vacuumlo.c logic in SQL:
```sql
BEGIN;
DECLARE myportal CURSOR WITH HOLD FOR SELECT lo FROM vacuum_l;
FOR recordvar IN myportal LOOP  -- this won't work
    BEGIN;
    FETCH FORWARD 500 FROM myportal;
    -- call to lo_unlink() here
    COMMIT;
END LOOP;
```
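To illustrate what I'm after, here is a client-side sketch of the same idea in Python. It is untested against a live database; it assumes psycopg2, a `vacuum_l(lo oid)` table, and a placeholder connection string, and the batch size of 500 is just an example value:

```python
# Sketch: delete large objects in batches, committing once per batch
# so each transaction holds at most BATCH_SIZE per-LO locks.
# Assumptions (not from the original post): psycopg2 is installed,
# the DSN "dbname=mydb" is a placeholder, and vacuum_l has an oid
# column named lo.

BATCH_SIZE = 500


def batches(items, size):
    """Yield successive chunks of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]


def unlink_all(conn):
    """Unlink every large object listed in vacuum_l, BATCH_SIZE per commit."""
    with conn.cursor() as cur:
        cur.execute("SELECT lo FROM vacuum_l")
        oids = [row[0] for row in cur.fetchall()]
    for chunk in batches(oids, BATCH_SIZE):
        with conn.cursor() as cur:
            for oid in chunk:
                cur.execute("SELECT lo_unlink(%s)", (oid,))
        conn.commit()  # one commit per BATCH_SIZE unlinks


if __name__ == "__main__":
    import psycopg2  # hypothetical dependency; install separately
    conn = psycopg2.connect("dbname=mydb")  # placeholder DSN
    unlink_all(conn)
    conn.close()
```

The key point is that the commit happens outside the inner loop, so the lock-table pressure per transaction is bounded by BATCH_SIZE.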
Using the vacuumlo utility is not an option for me, since Hibernate stores the OIDs in a text field.
Thanks for any help.