Re: Huge Data sets, simple queries

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Mike Biamonte" <mike(at)dbeat(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Huge Data sets, simple queries
Date: 2006-01-28 15:55:02
Message-ID: 11814.1138463702@sss.pgh.pa.us
Lists: pgsql-performance

"Mike Biamonte" <mike(at)dbeat(dot)com> writes:
> The queries I need to run on my 200 million transactions are relatively
> simple:

> select month, count(distinct cardnum), count(*), sum(amount) from
> transactions group by month;

count(distinct) is not "relatively simple", and the current
implementation isn't especially efficient. Can you avoid that
construct?
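
One common workaround (a sketch, not from this thread; the aliases and
column names are illustrative) is to pull the duplicate-elimination into
a subquery, so the DISTINCT pass sorts only (month, cardnum) pairs while
the plain aggregates scan the raw rows in a separate pass:

    select m.month, d.cards, m.txns, m.total
    from (select month, count(*) as txns, sum(amount) as total
          from transactions
          group by month) m
    join (select month, count(*) as cards
          from (select distinct month, cardnum
                from transactions) s
          group by month) d using (month);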

Assuming that "month" means what it sounds like, the above would result
in running twelve parallel sort/uniq operations, one for each month
grouping, to eliminate duplicates before counting. If you've got sortmem
set high, twelve sorts running at once can easily blow out RAM in that
scenario ...
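
For illustration (hypothetical figures, not from the thread): with
sortmem at 256MB, twelve concurrent sorts could claim on the order of
12 x 256MB = 3GB of memory.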

regards, tom lane
