
temp table "on commit delete rows": transaction overhead

From: Artiom Makarov <artiom(dot)makarov(at)gmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: temp table "on commit delete rows": transaction overhead
Date: 2010-03-30 11:46:26
Message-ID: d448ea361003300446s3f750d2fj294093419699cb36@mail.gmail.com
Lists: pgsql-performance
Hi!

We have a Postgres database that is accessed by client apps via PL/pgSQL
stored procedures (8.4.1 on an x86_64 Ubuntu 8.04 server).

For various reasons we use about 25 temp tables with "on commit delete rows".
They are widely used by our stored procedures.

When temp tables with "on commit delete rows" exist, I can see a
strange delay at every "begin" and "commit".

2010-03-09 15:14:01 MSK logrus 32102 amber LOG:  duration: 20.809 ms
statement: BEGIN
2010-03-09 15:14:01 MSK logrus 32102 amber LOG:  duration: 0.809 ms
statement: SELECT  empl.BL_CustomerFreeCLGet('384154676925391',
'8189', NULL)
2010-03-09 15:14:01 MSK logrus 32102 amber LOG:  duration: 0.283 ms
statement: FETCH ALL IN "<unnamed portal 165>"; --
+++empl.BL_CustomerFreeCLGet+++<<21360>>
2010-03-09 15:14:01 MSK logrus 32102 amber LOG:  duration: 19.895 ms
statement: COMMIT
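
For reference, timings like those above come from statement duration
logging; a minimal postgresql.conf sketch (the parameter names are
standard, the prefix format is just an example resembling the lines
above):

```
log_min_duration_statement = 0    # log the duration of every completed statement (threshold in ms)
log_line_prefix = '%t %d %p %u '  # timestamp, database, PID, user
```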

The higher the system load and the more temp tables in the session,
the longer the "begin" and "commit" delays.


Test example below:

create database test;
create language plpgsql;
CREATE OR REPLACE FUNCTION test_connectionprepare(
    in_create bool,
    in_IsTemp bool,
    in_DelOnCommit bool,
    in_TableCount int)
RETURNS boolean AS $$
declare
    m_count int := 50;
begin
    m_count := coalesce(in_TableCount, m_count);

    -- creates (or drops) tables tmp_table_0 .. tmp_table_<m_count>
    FOR i IN 0..m_count LOOP
        if in_create then
            execute 'create '
                || case when in_IsTemp then ' temp ' else ' ' end
                || ' table tmp_table_' || i::text
                || ' (id int, pid int, name text) '
                || case when in_DelOnCommit then ' on commit delete rows ' else ' ' end
                || ';';
        else
            execute 'drop table if exists tmp_table_' || i::text || ';';
        end if;
    END LOOP;

    return in_create;
end;
$$ LANGUAGE plpgsql VOLATILE SECURITY DEFINER;
------------------------------------------------------------------------------

Now run the following pgScript:
DECLARE @I;
SET @I = 1;
WHILE @I <= 100
BEGIN

select now();

   SET @I = @I + 1;
END

It takes about 2200-2300 ms on my server.

Let's create 100 temp tables: select test_connectionprepare(true,true,true,100);

and run the script again.

We see a 2-3x slowdown!

Here is temp table count vs. test run time (ms):

0 - 2157-2187
10 - 2500-2704
50 - 5900-6000
100 - 7900-8000
500 - 43000+
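
The growth is roughly linear in the table count. A quick
back-of-the-envelope check of the per-table cost (plain Python; the
numbers are midpoints of the ranges above, and each run is 100
transactions):

```python
# Extra milliseconds per transaction, per temp table, implied by the
# measurements above (midpoints of the reported ranges, in ms).
baseline = 2172          # 0 temp tables: ~2157-2187 ms for 100 transactions
timings = {10: 2602, 50: 5950, 100: 7950, 500: 43000}

for n, total in timings.items():
    per_table = (total - baseline) / 100 / n
    print(f"{n:4d} tables: ~{per_table:.3f} ms extra per transaction per table")
```

The per-table overhead stays in the same rough ballpark (well under a
millisecond each), which is what you would expect if every temp table
adds a fixed amount of work to each commit.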

I suppose that all the temp tables are truncated before and after every
transaction. A very strange method for read-only transactions!
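
That guess matches the documented semantics: at every COMMIT the rows
of an "on commit delete rows" table are removed. A minimal sketch in
plain SQL (the table name t_demo is just an example):

```
CREATE TEMP TABLE t_demo(id int) ON COMMIT DELETE ROWS;

BEGIN;
INSERT INTO t_demo VALUES (1), (2);
SELECT count(*) FROM t_demo;  -- 2: rows are visible inside the transaction
COMMIT;

SELECT count(*) FROM t_demo;  -- 0: the table is emptied at COMMIT
```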
------------------------------------------------------------------------------

Sorry for my English.

My server info:
"PostgreSQL 8.4.1 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.2.real
(GCC) 4.2.4 (Ubuntu 4.2.4-1ubuntu4), 64-bit"
Linux u16 2.6.24-24-server #1 SMP Tue Jul 7 19:39:36 UTC 2009 x86_64 GNU/Linux
4xOpteron 16 processor cores.
