
tables with 300+ partitions

From: Pablo Alcaraz <pabloa(at)laotraesquina(dot)com(dot)ar>
To: pgsql-performance(at)postgresql(dot)org
Subject: tables with 300+ partitions
Date: 2007-10-30 18:09:05
Message-ID: 47277341.7070704@laotraesquina.com.ar
Lists: pgsql-performance
Hi All!

I have a big big big table. I tried to divide it into 300 partitions with
30M rows each. The problem appeared when I used the table to insert
data: the performance was LOW.

I did some testing. I created an empty table with 300 partitions. Then I
inserted some rows into it, and the performance was SLOW too.

SLOW = about 1% of the throughput of a non-partitioned table. That is too
much of a penalty.

Then I built a version with 10 partitions of 30M rows each and inserted
rows there. The performance was the same as the non-partitioned
version.

I suspect there is a locking problem. I think every SQL command takes a
lock on ALL the partitions, so the performance of concurrent inserts and
updates is far worse than with the non-partitioned version.

The performance degrades with the number of partitions, and it degrades
fast: I already have trouble with 40 partitions.

Am I right? Is there a workaround? Can I replace the partitioned version
with another schema? Any suggestions? I would prefer something
transparent to the program, because it uses EJB3, so any change to the
database layer means deep changes and testing.
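For reference, my setup is the usual inheritance-based partitioning, roughly like this (table, column, and trigger names here are just illustrative, and I show only one of the children):

```sql
-- Parent table; the application only ever talks to this one.
CREATE TABLE measurements (
    id      bigint NOT NULL,
    ts      timestamp NOT NULL,
    payload text
);

-- One child per range, with a CHECK constraint so constraint
-- exclusion can prune it from queries.
CREATE TABLE measurements_p001 (
    CHECK (id >= 0 AND id < 30000000)
) INHERITS (measurements);
-- ... repeated for each of the 300 partitions ...

-- Trigger function that routes each INSERT to the right child.
CREATE OR REPLACE FUNCTION measurements_insert() RETURNS trigger AS $$
BEGIN
    IF NEW.id >= 0 AND NEW.id < 30000000 THEN
        INSERT INTO measurements_p001 VALUES (NEW.*);
    -- ... one ELSIF branch per partition ...
    END IF;
    RETURN NULL;  -- row already stored in the child, skip the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurements_insert_trg
    BEFORE INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE measurements_insert();
```

So every insert through the parent has to walk that IF/ELSIF chain, and every statement against the parent has to consider all 300 children.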


Regards

Pablo Alcaraz

