
Re: Bunching "transactions"

From: Chris Browne <cbbrowne(at)acm(dot)org>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Bunching "transactions"
Date: 2007-10-25 17:55:23
Lists: pgsql-performance
jeandavid8(at)verizon(dot)net (Jean-David Beyer) writes:
> But what is the limitation on such a thing? In this case, I am just
> populating the database and there are no other users at such a time. I am
> willing to lose the whole insert of a file if something goes wrong -- I
> would fix whatever went wrong and start over anyway.
> But at some point, disk IO would have to be done. Is this just a function of
> how big /pgsql/data/postgresql.conf's shared_buffers is set to? Or does it
> have to do with wal_buffers and checkpoint_segments?

I have done bulk data loads where I was typically loading hundreds of
thousands of rows in a single transaction, and it is worth observing
that loading data from a pg_dump does exactly the same thing: in
general, each table's data is loaded as a single transaction.
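The bunching idea can be sketched with Python's stdlib sqlite3 module as a stand-in for PostgreSQL (the table name and row count here are made up for illustration): commit once for the whole batch rather than once per row, which is the pattern a pg_dump-style load follows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for a PostgreSQL connection
conn.execute("CREATE TABLE measurements (id INTEGER, val REAL)")

rows = [(i, i * 0.5) for i in range(100_000)]

# Bunched: all inserts run in one transaction; the "with conn" block
# issues a single COMMIT at the end instead of one per row.
with conn:
    conn.executemany("INSERT INTO measurements VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM measurements").fetchone()[0]
print(count)
```

If the load fails partway through, the whole transaction rolls back, matching the "fix whatever went wrong and start over" approach described above.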
It has tended to be the case that increasing the number of checkpoint
segments is helpful, though it's less obvious that this is the case in
8.2 and later versions, what with the ongoing changes to checkpoint
handling.
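For the 8.x-era versions under discussion, these knobs live in postgresql.conf; a minimal sketch with purely illustrative values (not recommendations for any particular workload):

```
# postgresql.conf -- illustrative values for a bulk-load setup (8.x era)
shared_buffers = 256MB        # buffer cache (memory units accepted from 8.2 on)
wal_buffers = 1MB             # WAL staged in memory before being written out
checkpoint_segments = 32      # more 16MB WAL segments between checkpoints
                              # (this setting was replaced by max_wal_size in 9.5)
```

Raising checkpoint_segments spaces checkpoints further apart, which reduces repeated full-page writes during a large load.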
In general, this isn't something that needs to be tuned really finely;
on a reasonably tuned DB, "pretty big transactions" should work fine,
up to rather large sizes of "pretty big."
"Why use Windows, since there is a door?"
-- <fachat(at)galileo(dot)rhein-neckar(dot)de> Andre Fachat

pgsql-performance by date

Next: From: Jean-David Beyer, Date: 2007-10-25 19:15:31, Subject: Re: Bunching "transactions"
Previous: From: Tom Lane, Date: 2007-10-25 16:21:53, Subject: Re: Bunching "transactions"

Copyright © 1996-2017 The PostgreSQL Global Development Group