Re: VACUUM FULL out of memory

From: Michael Akinde <michael(dot)akinde(at)met(dot)no>
To: Michael Akinde <michael(dot)akinde(at)met(dot)no>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: VACUUM FULL out of memory
Date: 2008-01-07 09:40:23
Message-ID: 4781F387.6070801@met.no
Lists: pgsql-hackers

As suggested, I tested a VACUUM FULL ANALYZE with 128 MB shared_buffers
and 512 MB reserved for maintenance_work_mem (on a 32-bit machine with 4
GB RAM). That ought to leave more than enough room for the other
processes on the system. Again, the VACUUM fails with the following
error (identical to the error we had when maintenance_work_mem was very
low).
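
For reference, a sketch of the exact settings and command (the parameter
names and syntax are standard; the values are the ones from this test):

    # postgresql.conf -- shared_buffers requires a server restart
    shared_buffers = 128MB
    maintenance_work_mem = 512MB

    -- then, from psql:
    VACUUM FULL ANALYZE pg_catalog.pg_largeobject;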

INFO: vacuuming "pg_catalog.pg_largeobject"
ERROR: out of memory
DETAIL: Failed on request of size 536870912
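
A quick sanity check on that request size (my arithmetic):

    536870912 bytes = 512 * 1024 * 1024 = 512 MB

That is exactly the maintenance_work_mem setting, which suggests the
failing request is the dead-tuple workspace sized directly from that
parameter, rather than an accumulation of smaller allocations.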

I've now also tested a 64-bit setup with 16 GB RAM and 2 GB
maintenance_work_mem, this time on PostgreSQL 8.2.5:

INFO: vacuuming "pg_catalog.pg_largeobject"
ERROR: invalid memory alloc request size 1073741824
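
The request size is again suspicious:

    1073741824 bytes = 1024^3 = 1 GB

If I read the allocator right, PostgreSQL rejects any single palloc
request of 1 GB or more (MaxAllocSize is 1 GB minus one byte), so with 2
GB of maintenance_work_mem the VACUUM appears to trip over that internal
limit before the operating system is even asked for the memory.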

It strikes me as somewhat worrying that VACUUM FULL ANALYZE has so much
trouble with a large table. Granted, 730 million rows is a good deal,
but it's really not that much for a large database. I'd expect an
operation on such a table to take time, of course, but not to
consistently fail with an out-of-memory error.

Any suggestions as to what we can otherwise try to isolate the problem?
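
One thing we can try on our side, sketched below: override
maintenance_work_mem per session (no restart needed) and bisect downward
until the VACUUM survives, which should at least tell us the threshold
at which the allocation fails.

    -- halve the setting until the command completes
    SET maintenance_work_mem = '256MB';
    VACUUM FULL VERBOSE ANALYZE pg_catalog.pg_largeobject;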

Regards,

Michael Akinde
Database Architect, met.no

Michael Akinde wrote:
> [Synopsis: VACUUM FULL ANALYZE goes out of memory on a very large
> pg_catalog.pg_largeobject table.]
>
> Simon Riggs wrote:
>> Can you run ANALYZE and then VACUUM VERBOSE, both on just
>> pg_largeobject, please? It will be useful to know whether they succeed
> ANALYZE:
>
> INFO: analyzing "pg_catalog.pg_largeobject"
> INFO: "pg_largeobject": scanned 3000 of 116049431 pages, containing
> 18883 live rows and 409 dead rows; 3000 rows in sample, 730453802
> estimated total rows
>
> VACUUM VERBOSE:
>
> INFO: vacuuming "pg_catalog.pg_largeobject"
> INFO: scanned index "pg_largeobject_loid_pn_index" to remove
> 106756133 row versions
> DETAIL: CPU 38.88s/303.43u sec elapsed 2574.24 sec.
> INFO: "pg_largeobject": removed 106756133 row versions in 13190323 pages
> DETAIL: CPU 259.42s/113.20u sec elapsed 14017.17 sec.
> INFO: index "pg_largeobject_loid_pn_index" now contains 706303560 row
> versions in 2674471 pages
> DETAIL: 103960219 index row versions were removed.
> 356977 index pages have been deleted, 77870 are currently reusable.
> CPU 0.00s/0.00u sec elapsed 0.02 sec.
> INFO: "pg_largeobject": found 17489832 removable, 706303560
> nonremovable row versions in 116049431 pages
> DETAIL: 0 dead row versions cannot be removed yet.
> There were 36000670 unused item pointers.
> 64493445 pages contain useful free space.
> 0 pages are entirely empty.
> CPU 1605.42s/1107.48u sec elapsed 133032.02 sec.
> WARNING: relation "pg_catalog.pg_largeobject" contains more than
> "max_fsm_pages" pages with useful free space
> HINT: Consider using VACUUM FULL on this relation or increasing the
> configuration parameter "max_fsm_pages".
> VACUUM
>
> (This took some 36+ hours. It will be interesting to see what happens
> when we add another 20 years' worth of data to the 13 years already in
> the DB.)
>
> ANALYZE:
>
> INFO: analyzing "pg_catalog.pg_largeobject"
> INFO: "pg_largeobject": scanned 3000 of 116049431 pages, containing
> 17830 live rows and 0 dead rows; 3000 rows in sample, 689720452
> estimated total rows
>
> I will lower the shared_buffers and maintenance_work_mem settings as
> suggested in earlier posts before leaving for home this evening, and
> then let it run a VACUUM FULL ANALYZE. I remain dubious, though: as
> mentioned, the first test I did had quite low settings for these, and
> we still had the memory crash. No reason not to try it, though.
>
> Over Christmas, we will be moving this over to a 64-bit kernel and 16
> GB of RAM, so after that we'll be able to test the database with > 1 GB
> maintenance memory as well.
>
> Regards,
>
> Michael A.
> Database Architect, met.no
>
