Re: Performance Problems

From: "Thomas A(dot) Lowery" <tl-lists(at)stlowery(dot)net>
To: pgsql-admin(at)postgresql(dot)org
Subject: Re: Performance Problems
Date: 2002-08-24 01:08:25
Message-ID: 20020823210825.A12098@stllnx1.stlassoc.com
Lists: pgsql-admin

Alex,

You're able to alter the AutoCommit value during processing:

# Turn AutoCommit off to do the transactional work.
$dbh->{AutoCommit} = 0;
$dbh->do( qq{insert into yucky values ('A')} );
$dbh->commit;

# Turn AutoCommit back on so TRUNCATE runs outside a transaction block.
$dbh->{AutoCommit} = 1;

$dbh->do( qq{truncate table yucky} );

# Then switch it off again for the next transactional batch.
$dbh->{AutoCommit} = 0;
$dbh->do( qq{insert into yucky values ('A')} );
$dbh->commit;

Do you really need a temp table in the database? If so, have you
looked at using a memory-resident table instead (DBD::AnyData)?
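Something like this rough sketch (untested here; the table and column
names are made up, and it assumes DBD::AnyData keeps a table that isn't
tied to a file in memory only):

use DBI;

# No file is attached to the table, so it lives in memory and
# disappears at disconnect.
my $adh = DBI->connect( 'dbi:AnyData:', '', '', { RaiseError => 1 } );

$adh->do( q{create table scratch (id integer, val varchar(10))} );
$adh->do( q{insert into scratch values (1, 'A')} );

my $rows = $adh->selectall_arrayref(
    q{select val from scratch where id = 1} );

# "Clearing" it is just a drop and recreate -- no dead tuples, no vacuum.
$adh->do( q{drop table scratch} );

That keeps the scratch work off the PostgreSQL server entirely.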

Another thing I've found that helps is to cache reference or lookup data.
Memoize is an easy way to cache. This only works for data that doesn't
change between queries, i.e. select x from y where z = 1 always returns 'F' ...
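
For example (rough sketch; the DSN, table, and column names below are
placeholders, not your schema):

use DBI;
use Memoize;

my $dbh = DBI->connect( 'dbi:Pg:dbname=test', '', '', { RaiseError => 1 } );

sub lookup_x {
    my ($z) = @_;
    my ($x) = $dbh->selectrow_array(
        q{select x from y where z = ?}, undef, $z );
    return $x;
}

# Wrap the sub; repeat calls with the same $z are answered from the
# cache instead of hitting the database.
memoize('lookup_x');

my $val = lookup_x(1);    # queries once
$val    = lookup_x(1);    # cached, no query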

Tom

On Fri, Aug 23, 2002 at 11:21:22PM +0900, Alex Paulusberger wrote:
> Tom,
> thanks. I do use DELETE FROM since truncate is not an option in
> transaction blocks, and within DBI autocommit has to be turned off
> when connecting to the DB.
>
> But maybe you are right, and the overhead of not being able to
> truncate tables is bigger than the overhead of not using
> transaction blocks.
>
> Regards
> Alex
>
> Tom Lane wrote:
>
> >Another thought...
> >
> >Alex Paulusberger <alexp(at)meta-bit(dot)com> writes:
> >
> >>The whole process loops 4,500 times.
> >>For every loop
> >>- a temp table is cleared
> >>
> >
> >How exactly are you clearing the temp table? DELETE FROM isn't a good
> >plan because you'll still have dead tuples in there. You could do a
> >DELETE FROM and then VACUUM, but I'd suggest TRUNCATE instead.
