From: Bruno Wolff III <bruno(at)wolff(dot)to>
To: Bill <bill(at)math(dot)uchicago(dot)edu>
Cc: Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: postgresql and openmosix migration
Date: 2004-06-22 17:53:28
Message-ID: 20040622175328.GA20086@wolff.to
Lists: pgsql-performance
On Tue, Jun 22, 2004 at 12:31:15 -0500,
Bill <bill(at)math(dot)uchicago(dot)edu> wrote:
> Ok, so maybe someone on this group will have a better idea. We have a
> database of financial information with literally millions of entries. I
> have installed indexes, but for the rather computationally demanding
> processes we like to run, like a select query to find the commodity with
> the highest monthly or annual returns, the computer generally runs
> unacceptably slowly. So, other than clustering, how could I achieve a
> speed increase on these complex queries? Is this better in mysql or
> postgresql?
Queries using max (or min) can often be rewritten as queries using ORDER BY
and LIMIT so that they can take advantage of indexes. Doing this might help
with some of the problems you are seeing.
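As a sketch of that rewrite (table and column names here are hypothetical, not
from your schema):

```sql
-- Aggregate form: scans every row to compute the maximum.
--   SELECT max(monthly_return) FROM prices;

-- ORDER BY / LIMIT form: with an index on monthly_return,
-- the planner can walk the index backwards and stop after one row.
SELECT commodity, monthly_return
FROM prices
ORDER BY monthly_return DESC
LIMIT 1;
```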
If you commonly query aggregated data, it may be better to create derived
tables of the aggregates, maintained by triggers, and query against those
instead. If you do lots of selects relative to inserts and updates, this
can be a big win.
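A minimal sketch of such a trigger-maintained summary table, again with
hypothetical names (and using INSERT ... ON CONFLICT, which needs a modern
PostgreSQL, 9.5 or later):

```sql
-- Summary table holding the best return seen per commodity.
CREATE TABLE best_returns (
    commodity   text PRIMARY KEY,
    best_return numeric NOT NULL
);

-- Trigger function: keep the summary current as rows are inserted.
CREATE FUNCTION maintain_best_returns() RETURNS trigger AS $$
BEGIN
    INSERT INTO best_returns (commodity, best_return)
    VALUES (NEW.commodity, NEW.monthly_return)
    ON CONFLICT (commodity) DO UPDATE
        SET best_return = GREATEST(best_returns.best_return,
                                   EXCLUDED.best_return);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER prices_best_returns
    AFTER INSERT ON prices
    FOR EACH ROW EXECUTE PROCEDURE maintain_best_returns();
```

Queries for the top commodity then hit the small summary table rather than
the millions-of-rows base table; the cost is a little extra work on every
insert.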