Extrapolating performance expectation

From: Rob Sargent <robjsargent(at)gmail(dot)com>
To: pgsql-sql(at)postgresql(dot)org
Subject: Extrapolating performance expectation
Date: 2009-05-18 04:04:46
Message-ID: 5c4ddc540905172104t1a91b829tc2848798a31df886@mail.gmail.com
Lists: pgsql-sql

Can one extrapolate future performance expectations for ever-growing tables
from a given (non-trivial) data set, and if so with what curve? Corollary: what
curve would one expect query execution time to follow as a function of the
number of data rows (hardware and load staying constant)?
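
One way to collect data points for such a curve -- just a sketch, with 42
standing in for a real group id -- is to time the query against snapshots of
the table at increasing sizes:

-- psql: \timing toggles per-statement timing display
\timing
-- reports actual run time plus per-node timings and row counts
explain analyze
select user_id, element_type, count(*)
from dataset
where group_id = 42
group by user_id, element_type
order by user_id, element_type;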

I have user and group information on system usage. I would like to be able to
do year-to-date counts per user given a single group id, but on the data for
one business quarter the query takes between 10 and 60+ seconds, depending
both on the size of the group and on the group's total usage. Groups typically
have 10-100 users and consume 20K-80K records in a 9M-record data set. The
group_id column is indexed, but it is not the primary index. (Sad note: two
pseudo groups account for 50 percent of the total records, IIRC, and will
never be used for the usage-by-group query below.)
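
Since those two pseudo groups will never be queried this way, a partial index
that excludes them would stay much smaller than a full one. A sketch only,
with 0 and 1 as placeholder ids for the two pseudo groups:

-- sketch: 0 and 1 stand in for the two pseudo-group ids
create index dataset_group_id_partial_idx
on dataset (group_id)
where group_id not in (0, 1);

The planner will only consider such an index when it can prove the queried
group_id falls outside the excluded ids, so it helps with literal ids rather
than with parameters.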

This is a single table query:

select user_id, element_type, count(*)
from dataset
where group_id = N
group by user_id, element_type
order by user_id, element_type;
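
A composite index matching the filter and grouping columns is another thing
that might flatten the curve -- again only a sketch, index name arbitrary:

-- sketch: lets one group's rows be read in (user_id, element_type) order
create index dataset_grp_usr_elem_idx
on dataset (group_id, user_id, element_type);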

Is this the sort of situation that might benefit from increasing the number of
histogram bins (i.e. alter table ... alter column ... set statistics n, with n
above the default of 10)?
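
For reference, the exact incantation would be something like this (100 is just
an example target; the analyze afterwards rebuilds the stats so the larger
histogram actually takes effect):

-- raise the per-column statistics target above the default of 10
alter table dataset alter column group_id set statistics 100;
analyze dataset;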

Any and all pointers appreciated,
