Re: VACUUM ANALYZE out of memory

From: Michael Akinde <michael(dot)akinde(at)met(dot)no>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: VACUUM ANALYZE out of memory
Date: 2007-12-14 10:59:51
Message-ID: 47626227.6090804@met.no
Lists: pgsql-hackers

[Synopsis: VACUUM FULL ANALYZE goes out of memory on a very large
pg_catalog.pg_largeobject table.]

Simon Riggs wrote:
> Can you run ANALYZE and then VACUUM VERBOSE, both on just
> pg_largeobject, please? It will be useful to know whether they succeed
ANALYZE:

INFO: analyzing "pg_catalog.pg_largeobject"
INFO: "pg_largeobject": scanned 3000 of 116049431 pages, containing
18883 live rows and 409 dead rows; 3000 rows in sample, 730453802
estimated total rows
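
(As a sanity check: at least for this output, the estimated total matches
simple scaling of the sample's rows-per-page to the full relation. An
illustrative query, not part of the original run:

  SELECT round((18883.0 / 3000) * 116049431) AS estimated_total_rows;
  -- 18883 rows / 3000 sampled pages ~= 6.29 rows per page;
  -- over all 116049431 pages that is 730453802, matching the estimate.
)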

VACUUM VERBOSE:

INFO: vacuuming "pg_catalog.pg_largeobject"
INFO: scanned index "pg_largeobject_loid_pn_index" to remove 106756133
row versions
DETAIL: CPU 38.88s/303.43u sec elapsed 2574.24 sec.
INFO: "pg_largeobject": removed 106756133 row versions in 13190323 pages
DETAIL: CPU 259.42s/113.20u sec elapsed 14017.17 sec.
INFO: index "pg_largeobject_loid_pn_index" now contains 706303560 row
versions in 2674471 pages
DETAIL: 103960219 index row versions were removed.
356977 index pages have been deleted, 77870 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.02 sec.
INFO: "pg_largeobject": found 17489832 removable, 706303560
nonremovable row versions in 116049431 pages
DETAIL: 0 dead row versions cannot be removed yet.
There were 36000670 unused item pointers.
64493445 pages contain useful free space.
0 pages are entirely empty.
CPU 1605.42s/1107.48u sec elapsed 133032.02 sec.
WARNING: relation "pg_catalog.pg_largeobject" contains more than
"max_fsm_pages" pages with useful free space
HINT: Consider using VACUUM FULL on this relation or increasing the
configuration parameter "max_fsm_pages".
VACUUM
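
For reference, that WARNING means the free space map cannot track all of
the 64493445 pages with reusable space, so much of that space will not be
found for reuse. A sketch of the change the HINT suggests (values are
illustrative; in 8.x each FSM page slot costs roughly six bytes of shared
memory, so covering ~64.5M pages needs on the order of 400 MB, and the
setting only takes effect after a server restart):

  # postgresql.conf -- illustrative values only
  max_fsm_pages = 64500000      # >= the 64493445 pages reported above
  max_fsm_relations = 1000      # one slot per relation with free space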

(This took some 36+ hours. It will be interesting to see what happens
when we add another 20 years' worth of data to the 13 years already in
the DB.)

ANALYZE:

INFO: analyzing "pg_catalog.pg_largeobject"
INFO: "pg_largeobject": scanned 3000 of 116049431 pages, containing
17830 live rows and 0 dead rows; 3000 rows in sample, 689720452
estimated total rows

I will lower the shared_buffers and maintenance_work_mem settings as
suggested in earlier posts before leaving for home this evening, and then
let it run a VACUUM FULL ANALYZE. I remain dubious, though: as mentioned,
the first test I did used quite low settings for these, and we still hit
the memory crash. No reason not to try it, though.
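
For concreteness, a sketch of that run (the memory figure below is a
placeholder, not the value from the earlier posts; maintenance_work_mem
can be overridden per session, whereas shared memory changes need a
postgresql.conf edit and a server restart):

  SET maintenance_work_mem = '128MB';  -- placeholder value
  VACUUM FULL ANALYZE pg_catalog.pg_largeobject;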

Over Christmas, we will be moving this over to a 64-bit kernel and 16 GB
of RAM, so after that we'll be able to test the database with more than
1 GB of maintenance memory as well.

Regards,

Michael A.
Database Architect, met.no

Attachment: michael.akinde.vcf (text/x-vcard, 287 bytes)
