| From: | Vick Khera <vivek(at)khera(dot)org> |
|---|---|
| To: | pgsql-general <pgsql-general(at)postgresql(dot)org> |
| Subject: | Re: Partitioning into thousands of tables? |
| Date: | 2010-08-18 15:41:50 |
| Message-ID: | AANLkTi=h-13OPfaO17mVWeGrKK7GZt9ZX8mPD2s0nazr@mail.gmail.com |
| Lists: | pgsql-general |
On Fri, Aug 6, 2010 at 1:10 AM, Data Growth Pty Ltd
<datagrowth(at)gmail(dot)com> wrote:
> I have a table of around 200 million rows, occupying around 50G of disk. It
> is slow to write, so I would like to partition it better.
>
How big do you expect your data to get? I have two tables partitioned
into 100 subtables using a modulo operator on the PK integer ID
column. This keeps the row counts for each partition in the 5-million
range, which Postgres handles extremely well. When I do a mass
update/select that causes all partitions to be scanned, Postgres
skips over the non-matching partitions quickly with a cheap index
lookup on each one, so nothing really gets hammered.
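For anyone who wants to try the same layout, here is a minimal sketch of modulo partitioning with table inheritance, the mechanism available in the 8.4/9.0 era. The table name `events`, its columns, and the two example child tables are hypothetical, and only 2 of the 100 buckets are shown; a real setup would generate all children and trigger branches from a script.

```sql
-- Parent table: holds no rows itself, just defines the schema.
CREATE TABLE events (
    id      integer NOT NULL,
    payload text
);

-- One child per modulo bucket; the CHECK constraint documents
-- which rows each child may hold.
CREATE TABLE events_p00 (CHECK (id % 100 = 0)) INHERITS (events);
CREATE TABLE events_p01 (CHECK (id % 100 = 1)) INHERITS (events);

-- Each child gets its own PK/index; that is what keeps the
-- per-partition probes cheap when every partition is visited.
ALTER TABLE events_p00 ADD PRIMARY KEY (id);
ALTER TABLE events_p01 ADD PRIMARY KEY (id);

-- Trigger routes inserts on the parent into the right bucket.
CREATE OR REPLACE FUNCTION events_insert_router() RETURNS trigger AS $$
BEGIN
    IF    NEW.id % 100 = 0 THEN INSERT INTO events_p00 VALUES (NEW.*);
    ELSIF NEW.id % 100 = 1 THEN INSERT INTO events_p01 VALUES (NEW.*);
    -- ... one branch per bucket in the generated version ...
    ELSE RAISE EXCEPTION 'no partition for id %', NEW.id;
    END IF;
    RETURN NULL;   -- row is already in the child, skip the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_insert
    BEFORE INSERT ON events
    FOR EACH ROW EXECUTE PROCEDURE events_insert_router();
```

Queries against `events` still visit every child, but each child's index probe returns almost immediately when that bucket has no matching rows, which is the behavior described above. On PostgreSQL 11 and later, `PARTITION BY HASH (id)` gives a similar evenly-sized layout declaratively, without the trigger.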