Re: Prefetch

From: Mischa Sandberg <mischa(dot)sandberg(at)telus(dot)net>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Prefetch
Date: 2005-05-11 19:06:56
Message-ID: 1115838416.428257d0e5bc3@webmail.telus.net
Lists: pgsql-performance

Quoting Christopher Kings-Lynne <chriskl(at)familyhealth(dot)com(dot)au>:

> > Another trick you can use with large data sets like this when you want
> > results back in seconds is to have regularly updated tables that
> > aggregate the data along each column normally aggregated against the
> > main data set.
>
> > Maybe some bright person will prove me wrong by posting some working
> > information about how to get these apparently absent features working.
>
> Most people just use simple triggers to maintain aggregate summary
> tables...

Don't know if this is more appropriate to bizgres, but:
What the first poster is talking about is what OLAP cubes do.

For big aggregating systems (OLAP), triggers perform poorly,
compared to messy hand-rolled code. You may have dozens
of aggregates at various levels. Consider the effect of having
each detail row cascade into twenty updates.

It's particularly silly-looking when data is coming in as
batches of thousands of rows in a single insert, e.g.

COPY temp_table FROM STDIN;
UPDATE fact_table ... FROM ... temp_table;
INSERT INTO fact_table ... FROM ... temp_table;

(This pair of operations is so common that Oracle added its
"MERGE" statement for it.)
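To make the UPDATE-then-INSERT pattern concrete, here is a minimal sketch using Python's sqlite3 in place of PostgreSQL; the table and column names (fact_table, temp_table, key, amount, total) are invented for illustration, and the batch load into temp_table stands in for COPY ... FROM STDIN:

```python
import sqlite3

# In-memory database standing in for the warehouse.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE fact_table (key INTEGER PRIMARY KEY, total INTEGER)")
cur.execute("CREATE TABLE temp_table (key INTEGER, amount INTEGER)")

# Seed the fact table, then stage an incoming batch
# (this executemany stands in for COPY temp_table FROM STDIN).
cur.execute("INSERT INTO fact_table VALUES (1, 10)")
cur.executemany("INSERT INTO temp_table VALUES (?, ?)", [(1, 5), (2, 7)])

# Step 1: UPDATE fact rows that already exist, folding in the batch.
cur.execute("""
    UPDATE fact_table
    SET total = total + (SELECT amount FROM temp_table t
                         WHERE t.key = fact_table.key)
    WHERE key IN (SELECT key FROM temp_table)
""")

# Step 2: INSERT batch rows that have no matching fact row yet.
cur.execute("""
    INSERT INTO fact_table (key, total)
    SELECT key, amount FROM temp_table t
    WHERE t.key NOT IN (SELECT key FROM fact_table)
""")
conn.commit()

print(dict(cur.execute("SELECT key, total FROM fact_table")))
# -> {1: 15, 2: 7}
```

A MERGE (or, in modern PostgreSQL, INSERT ... ON CONFLICT) collapses these two statements into one, which is exactly why Oracle introduced it.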

Hence my recent post (request) about using RULES to aggregate
--- given no luck with "FOR EACH STATEMENT" triggers.

In response to

  • Re: Prefetch at 2005-05-11 04:53:05 from Christopher Kings-Lynne
