Re: exponential performance decrease in ISD transaction

From: Heikki Linnakangas <hlinnaka(at)iki(dot)fi>
To: John Nash <postgres(dot)dba(dot)needs(dot)help(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: exponential performance decrease in ISD transaction
Date: 2012-08-31 13:36:16
Message-ID: 5040BDD0.9070207@iki.fi
Lists: pgsql-performance

On 31.08.2012 15:27, John Nash wrote:
> Program 1: dbtransfromfile: this program creates a simple table
> consisting of a single int column. After the creation, the program
> inserts 1000 tuples into the table, which are never deleted; after that
> the program reads a transaction pattern from a given file and executes
> it a number of times determined when the program is launched.
>
> The transaction we are launching is (INSERT/SELECT/DELETE) the following:
>
> insert into T_TEST values (1);select * from T_TEST where
> c1=1000;delete from T_TEST where c1=1;commit;

Sounds like the table keeps growing as rows are inserted and
subsequently deleted. PostgreSQL doesn't immediately remove deleted
tuples from the underlying file, but simply marks them as deleted. The
rows are not physically removed until autovacuum kicks in and cleans
them up, or the table is vacuumed manually.

I'd suggest creating an index on t_test(c1), if there isn't one already.
It's not helpful when the table is small, but when the table is bloated
with all the dead tuples from the deletions, it should help to keep the
access fast despite the bloat.
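
Something like this, using a hypothetical index name (the table and
column names are taken from the quoted transaction):

  CREATE INDEX t_test_c1_idx ON t_test (c1);

With the index in place, the SELECT ... WHERE c1=1000 and DELETE ...
WHERE c1=1 lookups can use an index scan instead of sequentially
scanning the whole bloated heap.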

- Heikki
