Re: Increase value of OUTER_VAR

From: Andrey Lepikhov <a(dot)lepikhov(at)postgrespro(dot)ru>
To: Julien Rouhaud <rjuju123(at)gmail(dot)com>, Amit Langote <amitlangote09(at)gmail(dot)com>
Cc: David Rowley <dgrowleyml(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Increase value of OUTER_VAR
Date: 2021-03-04 07:43:56
Message-ID: 8081e0b7-c373-d37c-c5f7-486482aeeb5e@postgrespro.ru
Lists: pgsql-hackers

On 3/3/21 12:52, Julien Rouhaud wrote:
> On Wed, Mar 3, 2021 at 4:57 PM Amit Langote <amitlangote09(at)gmail(dot)com> wrote:
>>
>> On Wed, Mar 3, 2021 at 5:52 PM David Rowley <dgrowleyml(at)gmail(dot)com> wrote:
>>> Something like 1 million seems like a more realistic limit to me.
>>> That might still be on the high side, but it'll likely mean we'd not
>>> need to revisit this for quite a while.
>>
>> +1
>>
>> Also, I got reminded of this discussion from not so long ago:
>>
>> https://www.postgresql.org/message-id/flat/16302-e45634e2c0e34e97%40postgresql.org
Thank you
>
> +1
>
OK. I have changed the value to 1 million and explained this decision in
the comment.
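
To illustrate the shape of the change (a sketch only, assuming the
special-varno definitions in src/include/nodes/primnodes.h; the exact
values and comments are in the attached patch):

#define INNER_VAR	1000000		/* reference to inner subplan */
#define OUTER_VAR	1000001		/* reference to outer subplan */
#define INDEX_VAR	1000002		/* reference to index column */

These special varnos only need to stay above any real range table
index, so 1 million leaves plenty of headroom for queries that
reference a very large number of range table entries.
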
This issue arises in two cases:
1. Range partitioning on a timestamp column.
2. Hash partitioning.
Users choose range partitioning by timestamp because they want to insert
new data quickly and analyze the entire data set.
Also, in some discussions I have seen Oracle users reporting issues with
more than 1e5 partitions.

--
regards,
Andrey Lepikhov
Postgres Professional

Attachment Content-Type Size
0001-Increase-values-of-special-varnos-to-1-million.patch text/plain 1.3 KB
