From: "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "Alvaro Herrera" <alvherre(at)commandprompt(dot)com>, tfinneid(at)student(dot)matnat(dot)uio(dot)no, "Gregory Stark" <stark(at)enterprisedb(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: select count() out of memory
Date: 2007-10-25 14:44:02
Message-ID: dcc563d10710250744v60f8ffd3ob3b577af29bfe3ec@mail.gmail.com
Lists: pgsql-general
On 10/25/07, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Alvaro Herrera <alvherre(at)commandprompt(dot)com> writes:
> > tfinneid(at)student(dot)matnat(dot)uio(dot)no wrote:
> >> I did a test previously, where I created 1 million partitions (without
> >> data) and I checked the limits of pg, so I think it should be ok.
>
> > Clearly it's not.
>
> You couldn't have tested it too much --- even planning a query over so
> many tables would take forever, and actually executing it would surely
> have run the system out of locktable space before it even started
> scanning.
>
> The partitioning facility is designed for partition counts in the tens,
> or maybe hundreds at the most.
I've had good results well into the hundreds, but after about 400 or
so, things start to get a bit wonky.
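For context, partitioning in PostgreSQL of that era (8.x) was built on table inheritance plus CHECK constraints, so every child table a query might touch adds planning work and a lock table entry. A minimal sketch of that scheme; the table and column names here are invented for illustration:

```sql
-- Parent table; children inherit its columns.
CREATE TABLE measurements (
    id       bigint,
    taken_at timestamptz,
    value    double precision
);

-- One child per month. The CHECK constraint is what lets the planner
-- exclude irrelevant partitions when constraint_exclusion is enabled.
CREATE TABLE measurements_2007_10 (
    CHECK (taken_at >= '2007-10-01' AND taken_at < '2007-11-01')
) INHERITS (measurements);

-- With exclusion on, this query can scan only the matching child;
-- without it, a scan of the parent visits (and locks) every child.
SET constraint_exclusion = on;
SELECT count(*)
  FROM measurements
 WHERE taken_at >= '2007-10-01' AND taken_at < '2007-11-01';
```

Because each partition touched still costs planner time and a shared lock table slot, the practical ceiling lands in the range discussed above rather than anywhere near a million children.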