It was Thursday 19 November 2009 11:08:10 pm that the wise Tom Lane thus wrote:
> <hashinclude(at)gmail(dot)com> writes:
> > To make the retrieval faster, I'm using a
> > partitioning scheme as follows:
> > stats_300: data gathered at 5 mins, child tables named stats_300_t1_t2
> > (where t2 - t1 = 2 hrs), i.e. 12 tables in one day
> > stats_3600: data gathered / calculated over 1 hour, child tables
> > similar to the above - stats_3600_t1_t2, where (t2 - t1) is 2 days
> > (i.e. 15 tables a month)
> > stats_86400: data gathered / calculated over 1 day, stored as
> > stats_86400_t1_t2 where (t2 - t1) is 30 days (i.e. 12 tables a year).
> So you've got, um, something less than a hundred rows in any one child
> table? This is carrying partitioning to an insane degree, and your
> performance is NOT going to be improved by it.
Sorry, I forgot to mention - in the "normal" case, each of those tables will
have a few hundred thousand records, and in the worst case (the tables store
info on up to 2000 endpoints) it can be around 5 million.
Also, the partitioning is not final yet (we might move to 6- or 12-hour
partitions), which is why I need to run the load test :)
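For reference, each child is plain inheritance plus a CHECK constraint so
constraint exclusion can prune by time range. A minimal sketch (the column
names and the epoch boundaries in the child's name are illustrative, not the
real schema):

-- Parent table for the 5-minute stats; the real tables carry more
-- per-endpoint counters.
CREATE TABLE stats_300 (
    endpoint_id int         NOT NULL,
    sample_time timestamptz NOT NULL,
    value       bigint
);

-- One 2-hour child (t1/t2 in the name are epoch seconds).  The CHECK
-- constraint is what lets the planner skip children outside the
-- queried range, provided constraint_exclusion is enabled.
CREATE TABLE stats_300_1258588800_1258596000 (
    CHECK (sample_time >= to_timestamp(1258588800)
       AND sample_time <  to_timestamp(1258596000))
) INHERITS (stats_300);

CREATE INDEX stats_300_1258588800_1258596000_time_idx
    ON stats_300_1258588800_1258596000 (sample_time);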
> I'd suggest partitioning on boundaries that will give you order of a
> million rows per child. That could be argued an order of magnitude or
> two either way, but what you've got is well outside the useful range.
> > I'm running into the error "ERROR: out of shared memory HINT: You
> > might need to increase max_locks_per_transaction."
> No surprise given the number of tables and indexes you're forcing
> the system to deal with ...
How many locks does PG take per table/index? Even in my current state
(< 50 tables, < 250 tables + indexes combined), is it reasonable for a
2000-entry lock table to run out?
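In case it's useful, one way to watch the lock table while the test runs is
something like this from a second session (the sizing formula in the comment
is the one from the docs):

-- Count heavyweight locks currently held, by type and mode:
SELECT locktype, mode, count(*)
FROM pg_locks
GROUP BY locktype, mode
ORDER BY count(*) DESC;

-- The shared lock table has room for roughly
--   max_locks_per_transaction * (max_connections + max_prepared_transactions)
-- entries, shared by all sessions, so a single backend touching many
-- partitions and their indexes can exhaust it on its own:
SHOW max_locks_per_transaction;
SHOW max_connections;
SHOW max_prepared_transactions;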