Re: Drop table vs begin/end transaction

From: Francisco Reyes <lists(at)natserv(dot)com>
To: Pgsql Novice <pgsql-novice(at)postgresql(dot)org>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: Drop table vs begin/end transaction
Date: 2001-11-18 22:01:16
Message-ID: 20011118164702.T70828-100000@zoraida.natserv.net

> On Sun, 18 Nov 2001, Tom Lane wrote:
>
> > Francisco Reyes <lists(at)natserv(dot)com> writes:
> > > My alternative is to use delete from <table>. Besides being slower, I wonder
> > > if this would not make my "vacuum analyze" run much slower.
> >
> > Consider TRUNCATE TABLE

What about the effect of TRUNCATE/DROP TABLE versus the need to vacuum?

For instance, I was loading close to 800K records, and at record 669,209
there was a bad character, '\', and the load stopped.

I was on another terminal and didn't know the load had crashed. I ran a
'select count(*)' against the table, and after a long while it came back
with '0'. My concern is that all the records inserted before the failure
seem to still be "there" (marked as dead?).

Does this mean that failed transactions leave all their inserted records
behind, waiting for a vacuum? If I drop or truncate the table before my load,
will all those dead(?) records disappear, or at least no longer be associated
with the table? Or is it best just to run vacuum after a failed load?
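For what it's worth, the reload sequence I have in mind would look something
like this (table and file names are made up for illustration):

```sql
-- Hypothetical table/file names, purely to illustrate the sequence.
-- TRUNCATE reclaims the table's storage immediately, so dead rows
-- left behind by a failed load do not linger waiting for a vacuum.
TRUNCATE TABLE staging;

-- Reload the data. A failed COPY is rolled back as a whole, but the
-- aborted rows still take up disk space until a vacuum (or a truncate,
-- as above) removes them.
COPY staging FROM '/path/to/data.txt';

-- Once the load succeeds, update the planner's statistics.
VACUUM ANALYZE staging;
```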

I plan to do a vacuum analyze when I finish merging the tables and before
I start running my reports, but I wonder whether also running a plain vacuum
after each failed load would be a good idea.
