Re: Partitioning an existing table

From: Raghavendra <raghavendra(dot)rao(at)enterprisedb(dot)com>
To: Vick Khera <vivek(at)khera(dot)org>
Cc: PG-General Mailing List <pgsql-general(at)postgresql(dot)org>
Subject: Re: Partitioning an existing table
Date: 2011-04-26 11:30:42
Message-ID: BANLkTikxo-S3o06hx_mMjcC40CVOHE-45A@mail.gmail.com

>
> so now when your running application goes to query the table, it gets
> doubles? if you do it in transactions, then how long are you going to cause
> the master table to be locked when doing such a bulk delete?
>
> my point is to minimize service interruption, and that means moving small
> hunks at a time to minimize the locks needed.
>
>
Agreed, if you are referring to the application.

The partitioning documentation in PG is very clear on how to partition
> a new table. Create child tables, and have triggers that manage
> INSERT, UPDATE and DELETE commands.
> How about doing this with existing massive tables? (Over 120 million rows)
> I could create a new parent table with child tables, and then INSERT
> all these millions of rows to put them into the right partition. But
> is that recommended?

Here, I would go with the COPY command rather than INSERT. First, set up the partition/child tables, along with the triggers and the trigger function they call. Then dump the massive table to a .csv file with COPY ... TO, and load it back with COPY ... FROM pointed at the parent table. The triggers will route each row to the appropriate child table. This is the faster and more efficient way.
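
Something along these lines (a minimal sketch only, assuming a table called measurement partitioned by logdate ranges and a file path /tmp/measurement.csv; adjust the names, ranges and CSV options to your actual schema):

-- Dump the existing massive table to a file
COPY measurement TO '/tmp/measurement.csv' WITH CSV;

-- New parent and child tables (inheritance-based partitioning)
CREATE TABLE measurement_new (LIKE measurement INCLUDING DEFAULTS);

CREATE TABLE measurement_y2010 (
    CHECK ( logdate >= DATE '2010-01-01' AND logdate < DATE '2011-01-01' )
) INHERITS (measurement_new);

CREATE TABLE measurement_y2011 (
    CHECK ( logdate >= DATE '2011-01-01' AND logdate < DATE '2012-01-01' )
) INHERITS (measurement_new);

-- Trigger function that routes each row to the right child
CREATE OR REPLACE FUNCTION measurement_insert_trigger()
RETURNS TRIGGER AS $$
BEGIN
    IF NEW.logdate >= DATE '2010-01-01' AND NEW.logdate < DATE '2011-01-01' THEN
        INSERT INTO measurement_y2010 VALUES (NEW.*);
    ELSIF NEW.logdate >= DATE '2011-01-01' AND NEW.logdate < DATE '2012-01-01' THEN
        INSERT INTO measurement_y2011 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'logdate out of range in measurement_insert_trigger()';
    END IF;
    RETURN NULL;  -- do not store the row in the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER insert_measurement_trigger
    BEFORE INSERT ON measurement_new
    FOR EACH ROW EXECUTE PROCEDURE measurement_insert_trigger();

-- Load the data back through the parent; COPY fires the row trigger,
-- which pushes each row into the matching child table.
COPY measurement_new FROM '/tmp/measurement.csv' WITH CSV;

Once the load finishes and you have verified the row counts, you could rename the old table away and rename measurement_new into its place in a single transaction.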

Best Regards,
Raghavendra
EnterpriseDB Corporation
The Enterprise Postgres Company
Email: raghavendra(dot)rao(at)enterprisedb(dot)com
Blog: http://raghavt.blogspot.com/
