Re: select count() out of memory

From: Paul Boddie <paul(at)boddie(dot)org(dot)uk>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: select count() out of memory
Date: 2007-10-27 00:00:38
Message-ID: 1193443238.580060.312510@19g2000hsx.googlegroups.com
Lists: pgsql-general

On 25 Oct, 17:36, tfinn(dot)(dot)(dot)(at)student(dot)matnat(dot)uio(dot)no wrote:
>
> The design is based on access patterns, i.e. one partition represents a
> group of data along a discrete axis, so the partitions are perfect for
> modeling that. Only the last partition will be used in normal cases. The
> previous partitions only need to exist until the operator deletes them,
> which will be sometime between 1 and 6 weeks later.

This has been interesting reading because I'm working on a system
which involves a more batch-oriented approach to loading the data,
where I've found partitions useful both from a performance perspective
(my indexes would otherwise be inconveniently big for the total volume
of data) and from an administrative perspective (it's convenient to
control the constraints for discrete subsets of my data). However, if
all but the most recent data remains
relatively stable, why not maintain your own statistics for each
partition or, as someone else suggested, use the pg_class statistics?
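
For example, a rough per-partition row count can be read straight from
pg_class.reltuples; it's only as fresh as the last VACUUM or ANALYZE,
and the table names below are just placeholders:

    -- Approximate row counts for a set of child partitions
    -- ('measurements_%' is an illustrative naming pattern).
    SELECT c.relname, c.reltuples::bigint AS approx_rows
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE n.nspname = 'public'
      AND c.relname LIKE 'measurements_%'
    ORDER BY c.relname;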

I'd just be interested to hear what the best practices are when tables
get big and where the access patterns favour the most recently loaded
data and/or reliably identifiable subsets of the data, as they seem to
in this case and in my own case. The various tuning guides out there
have been very useful, but isn't there a point at which partitioning
is inevitable?
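
For what it's worth, the per-subset arrangement I have in mind follows
the usual inheritance-plus-CHECK-constraint pattern from the manual; a
rough sketch, with purely illustrative table and column names:

    -- Parent table plus one child per discrete group (names made up).
    CREATE TABLE measurement (
        id       bigint  NOT NULL,
        batch_id integer NOT NULL,
        payload  text
    );

    CREATE TABLE measurement_batch_42 (
        CHECK (batch_id = 42)
    ) INHERITS (measurement);

    CREATE INDEX measurement_batch_42_id_idx
        ON measurement_batch_42 (id);

    -- With constraint exclusion on, a query filtering on batch_id
    -- only scans the matching child table.
    SET constraint_exclusion = on;
    SELECT count(*) FROM measurement WHERE batch_id = 42;

    -- Retiring an old group is then just a matter of dropping its child.
    DROP TABLE measurement_batch_42;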

Paul
