Insertion to temp table deteriorating over time

From: "Steven Flatt" <steven(dot)flatt(at)gmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Insertion to temp table deteriorating over time
Date: 2006-12-13 16:44:19
Message-ID: 357fa7590612130844q58ef3ca5q31e164f9b006e0fc@mail.gmail.com
Lists: pgsql-performance

Hi,

Our application is using Postgres 7.4 and I'd like to understand the root
cause of this problem:

To speed up overall insert time, our application writes thousands of
rows, one by one, into a temp table (with the same structure as the
permanent table), then does a bulk insert from the temp table into the
permanent table. After the bulk insert is done, the temp table is
truncated and the process repeats. We do this because Postgres can
perform many individual inserts into a temp table much faster than into
a permanent table.
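
In rough outline the pattern looks like this (table and column names
here are invented for illustration; the real tables are wider):

    -- One-time setup after connecting: a temp table with the same
    -- structure as the permanent table.
    CREATE TABLE events (id integer, payload text);
    CREATE TEMP TABLE events_staging (id integer, payload text);

    -- Repeated thousands of times: cheap single-row inserts.
    INSERT INTO events_staging VALUES (1, 'first row');
    INSERT INTO events_staging VALUES (2, 'second row');

    -- Once per batch: move everything across in one statement,
    -- then empty the staging table for the next batch.
    INSERT INTO events SELECT * FROM events_staging;
    TRUNCATE TABLE events_staging;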

The problem we are seeing is that the cost of a single insert to the
temp table grows over time. After a restart of Postgres, a single
insert to the temp table takes about 3ms; over a few days this grows to
about 60ms per insert. Restarting Postgres drops the insert time back
to 3ms, presumably because the temp table is re-created. Our current
workaround is to restart the database every few days, but we don't like
that solution much.

Any idea where the bloat is happening? I understand that repeatedly
dropping and re-creating the temp table could bloat pg_class (among
other catalog tables), but what could be going wrong if we keep the
same table and just truncate it?
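
If it is catalog bloat, I assume something like the following would
show the catalogs growing (relpages is only refreshed by
VACUUM/ANALYZE, so the numbers are approximate):

    -- Page counts for the catalogs most likely to bloat from
    -- temp-table churn; run after a VACUUM so relpages is current.
    SELECT relname, relpages, reltuples
      FROM pg_class
     WHERE relname IN ('pg_class', 'pg_attribute', 'pg_type', 'pg_depend')
     ORDER BY relpages DESC;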

Thanks,
Steve
