Re: Performance considerations for very heavy INSERT traffic

From: Vivek Khera <vivek(at)khera(dot)org>
To: Postgresql Performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Performance considerations for very heavy INSERT traffic
Date: 2005-09-21 16:01:48
Message-ID: 0C638D58-7AA9-4668-BE41-A9CC05DA3DA8@khera.org
Lists: pgsql-performance


On Sep 12, 2005, at 6:02 PM, Brandon Black wrote:

> - using COPY instead of INSERT ?
> (should be easy to do from the aggregators)
>
> Possibly, although it would kill the current design of returning
> the database transaction status for a single client packet back to
> the client on transaction success/failure. The aggregator could
> put several clients' data into a series of delayed multi-row copy
> statements.
>

Buffer through the file system on your aggregator. Once you "commit"
to the local disk file, return to your client that you got the
data. Then insert into the actual Postgres DB in large batches of
inserts inside a single Postgres transaction.
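Roughly like this minimal Python sketch, assuming a psycopg2 connection
and a hypothetical "tracking" table and spool path (all placeholders,
not the actual setup):

    import psycopg2

    SPOOL = "/var/spool/aggregator/current.log"  # placeholder spool file

    def flush_spool_to_pg(path=SPOOL):
        # Connection string is an assumption; adjust for your setup.
        conn = psycopg2.connect("dbname=stats")
        try:
            # The connection context manager wraps the whole batch in
            # one transaction: commit on success, rollback on error.
            with conn, conn.cursor() as cur:
                with open(path) as f:
                    # COPY streams the whole file in a single statement,
                    # far cheaper than one INSERT per row.
                    cur.copy_from(f, "tracking", sep="\t",
                                  columns=("client_id", "payload"))
        finally:
            conn.close()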

We have our web server log certain tracking requests to a local
file. With file locks and append mode, it is extremely quick and has
few contention delays. Then every so often we lock the file,
rename it, release the lock, and process it at our leisure to do
the inserts to Pg in one big transaction.
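The lock-and-rename rotation could look like this sketch, under the same
assumptions (Python, fcntl advisory locks, placeholder paths); the
rotated file would then be fed to something like flush_spool_to_pg above:

    import fcntl, os, time

    LOG = "/var/spool/aggregator/current.log"  # placeholder path

    def append_record(line):
        # Writers: exclusive-lock, append one line, unlock.
        with open(LOG, "a") as f:
            fcntl.flock(f, fcntl.LOCK_EX)
            f.write(line.rstrip("\n") + "\n")
            fcntl.flock(f, fcntl.LOCK_UN)

    def rotate():
        # Periodic job: lock, rename the log out of the way, unlock.
        # New writers will recreate the log; the renamed file can be
        # processed at leisure in one big Postgres transaction.
        rotated = LOG + "." + str(int(time.time()))
        with open(LOG, "a") as f:
            fcntl.flock(f, fcntl.LOCK_EX)
            os.rename(LOG, rotated)
            fcntl.flock(f, fcntl.LOCK_UN)
        return rotated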

Vivek Khera, Ph.D.
+1-301-869-4449 x806
