Re: [GENERAL] Performance

From: Jim Richards <grumpy(at)cyber4(dot)org>
To: "Jason C(dot) Leach" <jcl(at)mail(dot)ocis(dot)net>, pgsql-list <pgsql-general(at)hub(dot)org>
Subject: Re: [GENERAL] Performance
Date: 1999-10-29 08:16:30
Message-ID: 199910290816.EAA62562@hub.org
Lists: pgsql-general


I don't know about DBI specifically, but it should support this;
try doing the inserts as

BEGIN WORK

INSERT ...
INSERT ...
INSERT ...
INSERT ...

COMMIT WORK

This will wrap all the inserts in a single transaction, so while the
inserts run, other processes won't be able to see the changes until the
commit is done. It also means that if there is an error during the
insert sequence, the whole batch can be rolled back without a problem.
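In Perl with DBI/DBD::Pg, the usual way to get the same effect is to turn
off AutoCommit and commit once at the end of the loop. A rough sketch only
(the connection string, table, and column names here are invented for
illustration, not taken from your setup):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# AutoCommit => 0 makes DBI start a transaction implicitly, so the
# individual inserts are not flushed visibly one by one.
my $dbh = DBI->connect("dbi:Pg:dbname=test", "", "",
                       { AutoCommit => 0, RaiseError => 1 });

# Prepare once, execute many times -- this avoids re-parsing the
# statement for every row.
my $sth = $dbh->prepare("INSERT INTO items (id, name) VALUES (?, ?)");

eval {
    for my $i (1 .. 5000) {
        $sth->execute($i, "item $i");
    }
    $dbh->commit;          # all rows become visible at once
};
if ($@) {
    warn "insert failed, rolling back: $@";
    $dbh->rollback;        # the whole batch is undone cleanly
}

$dbh->disconnect;
```

With RaiseError => 1, any failed execute throws, the eval catches it, and
the rollback discards every insert made so far -- the same behaviour as the
BEGIN/COMMIT block above.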

>I've been playing with pgsql for a few days now and am getting the hang
>of it. I just did a loop that inserts a few thousand records into a
>table. I did a statement, prepare, execute; it worked fine although pg
>seemed to access the hd for every insert. Is there a way to cache
>inserts and then write them all at once later. I'm using Perl with
>DBD::Pg/DBI and see with DBI there is a prepare_cached, and a commit.
>Not much in the way of docs for the modules though.
>
>Perhaps I should be doing statement, prepare, statement, prepare,
>commit?

--
Subvert the dominant paradigm
http://www.cyber4.org/members/grumpy/index.html

In response to

  • Performance at 1999-10-29 07:47:18 from Jason C. Leach
