From: Stephan Szabo <sszabo(at)megazone23(dot)bigpanda(dot)com>
To: Joe Koenig <joe(at)jwebmedia(dot)com>
Cc: <pgsql-general(at)postgresql(dot)org>
Subject: Re: Understanding Transactions
Date: 2001-12-12 18:25:38
Message-ID: 20011212102303.G90917-100000@megazone23.bigpanda.com
Lists: pgsql-general
On Wed, 12 Dec 2001, Joe Koenig wrote:
> I've been reading through the archive and I see that when doing a large
> amount of inserts it is much faster to wrap a bunch of them in a
> transaction. But here's my question. Say I need to do about 100,000
> inserts and using COPY isn't an option. Is postgres going to do the
> inserts faster in groups of 1,000 or 5,000? I know that letting each
> insert be in its own transaction creates a lot of overhead, but I didn't
> know if putting 5,000 inserts into a transaction created overhead for
> that transaction. Hopefully my question makes sense. Thanks,
Well, it depends on the schema to some extent. If the table has foreign
keys, there was a problem (it's been fixed, but I don't know in which
version) with the deferred trigger manager on long transactions.
Batches of 1k or 5k rows are probably okay in any case.
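The batching idea above can be sketched as follows. This is a minimal
illustration, not anything from the thread: the table name `items`, its
columns, and the batch size of 5000 are all hypothetical, and the rows
are emitted as plain SQL text (in practice you would send them through
your driver with proper parameter escaping).

```python
def batch_insert_statements(rows, batch_size=5000):
    """Group INSERT statements into explicit transactions.

    Wrapping many inserts in one BEGIN/COMMIT block avoids the
    per-statement commit overhead discussed above. The table `items`
    and batch_size=5000 are illustrative assumptions only.
    """
    batch = []
    for name, value in rows:
        # NOTE: string interpolation is for illustration; a real driver
        # should use parameterized queries to avoid SQL injection.
        batch.append(f"INSERT INTO items (name, value) VALUES ('{name}', {value});")
        if len(batch) == batch_size:
            yield "BEGIN;\n" + "\n".join(batch) + "\nCOMMIT;"
            batch = []
    if batch:  # flush the final, possibly short, batch
        yield "BEGIN;\n" + "\n".join(batch) + "\nCOMMIT;"
```

Each yielded chunk is one transaction; 100,000 rows at 5,000 per batch
would produce 20 transactions instead of 100,000 implicit ones.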