Re: Heavy continuous load

From: Craig Ringer <ringerc(at)ringerc(dot)id(dot)au>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Heavy continuous load
Date: 2011-10-19 00:56:43
Message-ID: 4E9E204B.2040608@ringerc.id.au
Lists: pgsql-performance

On 10/18/2011 08:09 PM, kzsolt wrote:

> What is important for this task: I do not need any transactions, so the
> COMMIT and ROLLBACK features are useless to me.
> The question is: how do I minimize the rollback activity to free resources?

Actually, you do need transactions, because they're what prevents your
database from being corrupted or left in a half-updated state if/when
the database server loses power, crashes, etc.

Presumably when you say "rollback activity" you mean the overheads
involved in supporting transactional, atomic updates? If so, there isn't
much you can do in an INSERT-only database except try to have as few
indexes as possible and do your inserts inside transactions in batches,
rather than one-by-one as individual statements.
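As a minimal sketch of that batching advice (the table and column names here are hypothetical, not from the original thread), grouping many rows into one statement inside one explicit transaction means PostgreSQL only has to flush the WAL once per batch instead of once per row:

```sql
BEGIN;
INSERT INTO sensor_log (ts, value)        -- hypothetical logging table
VALUES
    ('2011-10-18 12:00:01', 42),
    ('2011-10-18 12:00:02', 43),
    ('2011-10-18 12:00:03', 44);          -- many rows per statement
COMMIT;                                   -- one durable WAL flush for the whole batch
```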

Consider logging to a flat file to accumulate data, then COPYing data in
batches into PostgreSQL.
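Assuming the same hypothetical table and a tab-separated log file, a batch load from psql could look like the following; `\copy` runs client-side in psql, whereas server-side `COPY ... FROM 'file'` would need the file to be readable by the server process:

```sql
-- Load accumulated rows from a flat file in one bulk operation
\copy sensor_log (ts, value) FROM 'sensor.log' WITH (FORMAT csv, DELIMITER E'\t')
```

COPY is substantially faster than repeated INSERTs because it parses one stream and avoids per-statement overhead.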

An alternative would be to write your data to an unlogged table
(PostgreSQL 9.1+ only), then `INSERT INTO ... SELECT ...` it into the
main table(s) in batches. Unlogged tables avoid most of the write-ahead
log overhead that provides crash safety, but they do that by NOT BEING
CRASH SAFE. If your server, or the PostgreSQL process, crashes, then
unlogged tables will be ERASED. If you can afford to lose a little data
in that case, you can use unlogged tables as a staging area.
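A sketch of that staging pattern, again using the hypothetical `sensor_log` table (requires 9.1+ for UNLOGGED):

```sql
-- Staging table: fast writes, no WAL, but emptied after a crash
CREATE UNLOGGED TABLE sensor_log_staging (LIKE sensor_log);

-- High-rate inserts target sensor_log_staging; then, periodically,
-- move the accumulated rows into the durable table in one transaction:
BEGIN;
INSERT INTO sensor_log SELECT * FROM sensor_log_staging;
TRUNCATE sensor_log_staging;
COMMIT;
```

Only the periodic move pays the full WAL cost; a crash loses at most the rows still sitting in the staging table.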

--
Craig Ringer
