Re: Partitions and max_locks_per_transaction

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Hrishikesh (हृषीकेश मेहेंदळे) <hashinclude(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Partitions and max_locks_per_transaction
Date: 2009-11-20 07:08:10
Message-ID: 16208.1258700890@sss.pgh.pa.us
Lists: pgsql-performance

Hrishikesh (हृषीकेश मेहेंदळे) <hashinclude(at)gmail(dot)com> writes:
> To make make the retrieval faster, I'm using a
> partitioning scheme as follows:

> stats_300: data gathered at 5 mins, child tables named stats_300_t1_t2
> (where t2 - t1 = 2 hrs), i.e. 12 tables in one day
> stats_3600: data gathered / calculated over 1 hour, child tables
> similar to the above - stats_3600_t1_t2, where (t2 - t1) is 2 days
> (i.e. 15 tables a month)
> stats_86400: data gathered / calculated over 1 day, stored as
> stats_86400_t1_t2 where (t2 - t1) is 30 days (i.e. 12 tables a year).

So you've got, um, something less than a hundred rows in any one child
table? This is carrying partitioning to an insane degree, and your
performance is NOT going to be improved by it.

I'd suggest partitioning on boundaries that will give you order of a
million rows per child. That could be argued an order of magnitude or
two either way, but what you've got is well outside the useful range.
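[Editor's note: a coarser scheme along these lines might look like the sketch below, using constraint-exclusion child tables with one child per month rather than per two hours. Table, column, and boundary values are illustrative, not taken from the thread.]

```sql
-- Hypothetical: one monthly child for the 5-minute stats, instead of
-- twelve children per day.  Assumes a parent table stats_300 with an
-- epoch-seconds column "ts" (names are illustrative).
CREATE TABLE stats_300_200911 (
    -- Nov 2009: 1257033600 = 2009-11-01 00:00 UTC,
    --           1259625600 = 2009-12-01 00:00 UTC
    CHECK (ts >= 1257033600 AND ts < 1259625600)
) INHERITS (stats_300);

CREATE INDEX stats_300_200911_ts_idx ON stats_300_200911 (ts);
```

With constraint_exclusion enabled, the planner can still skip children outside the queried time range, but the total number of relations (and hence locks) per query drops by two orders of magnitude.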

> I'm running into the error "ERROR: out of shared memory HINT: You
> might need to increase max_locks_per_transaction."

No surprise given the number of tables and indexes you're forcing
the system to deal with ...
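[Editor's note: if the table count cannot be reduced immediately, the workaround the hint points at is a postgresql.conf change. The value below is illustrative, not a recommendation from the thread; the setting requires a server restart.]

```ini
# postgresql.conf -- illustrative value, tune to your workload.
# The shared lock table holds roughly
#   max_locks_per_transaction * (max_connections + max_prepared_transactions)
# entries, so raising this increases shared memory use.
max_locks_per_transaction = 256    # default is 64
```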

regards, tom lane
