We have noticed the following issue with vacuumlo on databases that have millions of records in pg_largeobject, i.e.:
WARNING: out of shared memory
Failed to remove lo 155987: ERROR: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
Why do we need to increase max_locks_per_transaction/shared memory for a cleanup operation, and if there is a huge number of records, how can we tackle this situation with limited memory? It is reproducible on postgresql-9.1.2. The steps are as follows (PFA vacuumlo-test_data.sql, which generates the dummy data):
1. ./bin/initdb -D data-vacuumlo_test1
2. ./bin/pg_ctl -D data-vacuumlo_test1 -l logfile_data-vacuumlo_test1 start
3. ./bin/createdb vacuumlo_test
4. bin/psql -d vacuumlo_test -f vacuumlo-test_data.sql
5. bin/vacuumlo vacuumlo_test
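The attached vacuumlo-test_data.sql is not reproduced here. Purely as a hedged illustration of what such a data-generation script could look like, a small generator along these lines emits one SELECT lo_create(0) per object; lo_create(0) creates a large object with a server-assigned OID that no table references, so vacuumlo will consider every one of them orphaned:

```python
# Hypothetical generator for a vacuumlo test-data script (an assumption:
# the real attachment is not shown in this message). Each emitted
# "SELECT lo_create(0);" creates one orphaned large object.
def make_test_sql(n_objects: int) -> str:
    lines = ["-- create orphaned large objects; vacuumlo should remove all of them"]
    lines += ["SELECT lo_create(0);"] * n_objects
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    # Write a small script; scale n_objects up to millions to reproduce.
    with open("vacuumlo-test_data-sketch.sql", "w") as f:
        f.write(make_test_sql(1000))
```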
~/work/pg/postgresql-9.1.2/inst$ bin/psql -d vacuumlo_test -f vacuumlo-test_data.sql
~/work/pg/postgresql-9.1.2/inst$ bin/vacuumlo vacuumlo_test
WARNING: out of shared memory
Failed to remove lo 36726: ERROR: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
Failed to remove lo 36727: ERROR: current transaction is aborted, commands ignored until end of transaction block
Failed to remove lo 36728: ERROR: current transaction is aborted, commands ignored until end of transaction block
Failed to remove lo 36729: ERROR: current transaction is aborted, commands ignored until end of transaction block
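As I understand the failure (a toy model below, not PostgreSQL internals): vacuumlo in 9.1 unlinks all orphaned large objects inside a single transaction, and each lo_unlink() holds one entry in the shared lock table until commit. That table has a fixed size, roughly max_locks_per_transaction times the number of allowed (prepared) transactions, so millions of unlinks in one transaction must overflow it no matter how large the setting is. Committing every N removals bounds peak lock usage at N regardless of the total object count. All constants in the sketch are illustrative:

```python
# Toy model of the shared lock table (sizes are made-up illustrations).
LOCK_TABLE_SIZE = 64 * 100  # e.g. max_locks_per_transaction=64, ~100 lock slots

def remove_objects(n_objects: int, batch_size: int) -> int:
    """Simulate vacuumlo removing n_objects, committing every batch_size.

    Returns the peak number of lock entries held; raises if the fixed-size
    lock table overflows (the 'out of shared memory' case).
    """
    held = peak = 0
    for i in range(n_objects):
        held += 1  # each lo_unlink holds a lock until the transaction commits
        if held > LOCK_TABLE_SIZE:
            raise RuntimeError("out of shared memory")
        peak = max(peak, held)
        if (i + 1) % batch_size == 0:
            held = 0  # COMMIT releases all locks taken in this transaction
    return peak
```

With batch_size equal to the object count (one big transaction, as in 9.1's vacuumlo) the model overflows once the table fills; with a modest batch size the peak stays at batch_size. This batching is, as far as I can tell, the direction a fix would take, e.g. a configurable per-transaction removal limit in vacuumlo.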
Best Regards,
Muhammad Asif Naeem
pgsql-hackers by date