tables with 300+ partitions

From: Pablo Alcaraz <pabloa(at)laotraesquina(dot)com(dot)ar>
To: pgsql-performance(at)postgresql(dot)org
Subject: tables with 300+ partitions
Date: 2007-10-30 18:09:05
Message-ID: 47277341.7070704@laotraesquina.com.ar
Lists: pgsql-performance

Hi All!

I have a big, big table. I tried to divide it into 300 partitions of
30M rows each. The problem appeared when I used the table for inserts:
performance was LOW.

I did some testing. I created an empty table with 300 partitions. Then I
inserted some rows into it, and performance was SLOW too.

SLOW = about 1% of the performance of a non-partitioned table. That is
too big a penalty.

Then I made a version with 10 partitions of 30M rows each and inserted
rows there. Performance was the same as with the non-partitioned
version.
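For context, the partitioned table follows the usual inheritance pattern
(sketched here with hypothetical names and boundaries; the real schema is
larger and has one child and one trigger branch per partition):

```sql
-- Parent table; partitions inherit from it (hypothetical names).
CREATE TABLE big_table (
    id      bigint NOT NULL,
    created date   NOT NULL,
    payload text
);

-- One child per partition, with a CHECK constraint for exclusion.
CREATE TABLE big_table_p001 (
    CHECK (id >= 0 AND id < 30000000)
) INHERITS (big_table);
-- ... repeated up to big_table_p300 ...

-- Redirect inserts on the parent into the right child.
CREATE OR REPLACE FUNCTION big_table_insert() RETURNS trigger AS $$
BEGIN
    IF NEW.id >= 0 AND NEW.id < 30000000 THEN
        INSERT INTO big_table_p001 VALUES (NEW.*);
    -- ... one branch per partition ...
    END IF;
    RETURN NULL;  -- row already routed; skip insert into the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER big_table_insert_trg
    BEFORE INSERT ON big_table
    FOR EACH ROW EXECUTE PROCEDURE big_table_insert();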

I suspect there is a locking problem. I think every SQL command takes a
lock on ALL the partitions, so performance with concurrent inserts and
updates is far worse than in the non-partitioned version.
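One way to check this theory is to watch pg_locks from a second session
while the slow inserts are running (the table-name prefix below is a
hypothetical placeholder):

```sql
-- Count how many relations each backend currently holds locks on.
-- Run in a separate session while the insert workload is active.
SELECT l.pid, count(*) AS locked_relations
FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation
WHERE c.relname LIKE 'big_table%'   -- hypothetical partition name prefix
GROUP BY l.pid
ORDER BY locked_relations DESC;
```

If a single inserting backend shows locks on all 300 partitions, that
would support the hypothesis.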

Performance degrades with the number of partitions, and it degrades
fast: I already have trouble with 40 partitions.

Am I right? Is there a workaround? Can I replace the partitioned version
with another schema? Any suggestions? I would prefer something
transparent to the application, because it uses EJB3, so any change to
the database layer means deep changes and testing.

Regards

Pablo Alcaraz

Responses

Heikki Linnakangas  2007-10-30 18:56:50  Re: tables with 300+ partitions