Re: installcheck failing on psql_crosstab

From: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: installcheck failing on psql_crosstab
Date: 2016-06-07 03:41:31
Message-ID: CAB7nPqR4G5GXkfrCfVXKctvG-Djx_1MToATQo7vw6Twf6dvpRg@mail.gmail.com
Lists: pgsql-hackers

On Tue, Jun 7, 2016 at 12:31 PM, Michael Paquier
<michael(dot)paquier(at)gmail(dot)com> wrote:
> On Tue, Jun 7, 2016 at 12:28 AM, Alvaro Herrera
> <alvherre(at)2ndquadrant(dot)com> wrote:
>> Tom Lane wrote:
>>> Alvaro Herrera <alvherre(at)2ndquadrant(dot)com> writes:
>>
>>> > I can't imagine that the server is avoiding hash aggregation on a 1MB
>>> > work_mem limit for data that's a few dozen bytes. Is it really doing
>>> > that?
>>>
>>> Yup:
>>
>> Aha. Thanks for testing.
>>
>>> Now that you mention it, this does seem a bit odd, although I remember
>>> that there's a pretty substantial fudge factor in there when we have
>>> no statistics (which we don't in this example). If I ANALYZE ctv_data
>>> then it sticks to the hashagg plan all the way down to 64kB work_mem.
>>
>> Hmm, so we could solve the complaint by adding an ANALYZE. I'm open to
>> that; other opinions?
>
> We could just force work_mem to 64kB and then reset it.

Or just set work_mem to a suitable value for the duration of the
psql_crosstab run. Attached is my proposal.
--
Michael

Attachment: psql-crosstab-test.patch (text/x-diff, 2.1 KB)
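
The attached patch is not reproduced in the archive text above, so what
follows is only a minimal sketch of the approach under discussion:
pinning work_mem for the duration of the psql_crosstab regression test so
the planner's choice between hashed and sorted aggregation does not depend
on the installation's setting. The file path, the 4MB value, and the
commented-out ANALYZE alternative are illustrative assumptions, not the
contents of the actual patch.

    -- At the top of the test (e.g. src/test/regress/sql/psql_crosstab.sql),
    -- pin work_mem so the plan choice is stable regardless of the server's
    -- configured value. The exact value here is an assumption.
    SET work_mem TO '4MB';

    -- ... existing \crosstabview test queries run here ...

    -- Restore the installation's configured value once the test is done.
    RESET work_mem;

    -- The alternative floated upthread: ANALYZE the test table so the
    -- planner has real statistics and sticks to the hashagg plan even at
    -- low work_mem settings.
    -- ANALYZE ctv_data;

Either way, the point is to make the chosen plan, and therefore the row
ordering in the expected output, independent of whatever work_mem the
installation happens to run with.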
