Re: Populating huge tables each day

From: "Jim C(dot) Nasby" <decibel(at)decibel(dot)org>
To: Dann Corbit <DCorbit(at)connx(dot)com>
Cc: Ben-Nes Yonatan <da(at)canaan(dot)co(dot)il>, pgsql-general(at)postgresql(dot)org
Subject: Re: Populating huge tables each day
Date: 2005-06-28 20:49:18
Message-ID: 20050628204918.GE50976@decibel.org
Lists: pgsql-general

On Tue, Jun 28, 2005 at 10:36:58AM -0700, Dann Corbit wrote:
> > Nope, truncate is undoubtedly faster. But it also means you would have
> > downtime as you mentioned. If it were me, I'd probably make the
> > trade-off of using a delete inside a transaction.
>
> For every record in a bulk loaded table?

Sure. If the data's only being loaded once a day, it probably doesn't
matter if that delete takes 10 minutes.
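
For what it's worth, a rough sketch of the delete-then-reload inside a
single transaction (the table name and file path are made up):

-- Daily refresh of a hypothetical "products" table.
BEGIN;

-- Remove yesterday's rows. Because of MVCC, readers outside this
-- transaction still see the old data until COMMIT, so there is no
-- window where the table appears empty.
DELETE FROM products;

-- Reload today's data; COPY is much faster than row-by-row INSERTs.
COPY products FROM '/path/to/products.csv' WITH CSV;

-- Refresh planner statistics after the big rewrite.
ANALYZE products;

COMMIT;

-- Unlike TRUNCATE, DELETE leaves dead tuples behind, so a VACUUM
-- afterwards is needed to reclaim the space.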

> If it were that important that both servers be available all the time, I
> would bulk load into a second table with the same shape and then rename
> when completed.

Interesting idea, though the problem is that AFAIK everything will block
on the rename. If it didn't, this might be a better way to do it,
although it potentially complicates the code quite a bit (think about
needing to add indexes, rebuild RI, etc.).
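
To make that concrete, here's a rough sketch of the load-and-rename
approach (all object and column names are invented; the extra code comes
from having to rebuild indexes on the new table and recreate any foreign
keys that pointed at the old one):

-- Build a fresh copy of the hypothetical "products" table while the
-- old one stays online.
CREATE TABLE products_new (LIKE products);
COPY products_new FROM '/path/to/products.csv' WITH CSV;
CREATE INDEX products_new_sku_idx ON products_new (sku);
ANALYZE products_new;

-- Swap the tables in one transaction. The renames themselves are
-- quick, but they take an exclusive lock, so other sessions block
-- briefly here.
BEGIN;
ALTER TABLE products RENAME TO products_old;
ALTER TABLE products_new RENAME TO products;
COMMIT;

-- Foreign keys that referenced the old table still follow it under
-- its new name, so they would have to be dropped and recreated
-- against the new table before this:
DROP TABLE products_old;
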
--
Jim C. Nasby, Database Consultant decibel@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"
