From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Jaime Soler <jaime(dot)soler(at)gmail(dot)com>, Amit Khandekar <amitdkhan(dot)pg(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Hash join in SELECT target list expression keeps consuming memory
Date: 2018-03-21 14:55:08
Message-ID: 54bfec55-4727-05af-747c-da319e288820@2ndquadrant.com
Lists: pgsql-hackers

On 03/21/2018 02:18 PM, Jaime Soler wrote:
> Hi,
>
> We still get out of memory error during pg_dump execution
> ...
> pg_dump: reading row security enabled for table "public.lo_table"
> pg_dump: reading policies for table "public.lo_table"
> pg_dump: reading publications
> pg_dump: reading publication membership
> pg_dump: reading publication membership for table "public.lo_table"
> pg_dump: reading subscriptions
> pg_dump: reading large objects
> out of memory
>
Hmmmm ... that likely happens because of this for loop, which copies a
lot of data:
https://github.com/postgres/postgres/blob/master/src/bin/pg_dump/pg_dump.c#L3258
But I'm having trouble verifying that, because the query fetching the
list of objects is rather expensive with this number of large objects.
How long does it take for you? I wonder if there's a way to make the
query faster.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services