> -----Original Message-----
> From: cluster
> >> Probably by buying much faster disk hardware.
> > Or buy more RAM, so that the data can stay cached.
> So the only problem here is lack of RAM and/or disk speed?
I don't think you can reach that conclusion yet. As everyone said, the
reason the query was faster the second time is that the disk pages were
cached in RAM, and reading from RAM is far faster than reading from disk. If
I were you, I would optimize the query for the case where the disk pages aren't
in RAM. To test the query with nothing cached, you need to clear out both
Postgres's shared buffers and the OS page cache. That can be tricky, but it
may be as easy as running a big sequential scan on another table to evict the
cached pages.
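A more deterministic way to get a cold cache might look like the sketch below. It is Linux-only, must be run as root, and the init script path, database name, and query are placeholders for your setup, not anything from your report:

```shell
# Restart PostgreSQL to empty its shared buffers
# (init script path is an assumption; adjust for your distro).
/etc/init.d/postgresql restart

# Flush dirty pages to disk, then drop the OS page cache
# (drop_caches needs Linux kernel 2.6.16 or later).
sync
echo 3 > /proc/sys/vm/drop_caches

# Run the query once with timing to see the true cold-cache cost
# (replace mydb and the query with your own).
psql -d mydb -c 'EXPLAIN ANALYZE SELECT ...;'
```

The restart is what clears shared_buffers; dropping the OS cache alone isn't enough, because Postgres would still serve pages from its own buffer pool.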
As for optimizing the query, I noticed that all three joins are done with
nested loops. I wonder if another join method would be faster. Have you run
ANALYZE on all the tables? You aren't disabling hash joins or merge joins,
are you? If not, then as a test I would disable nested loops by
doing "set enable_nestloop=false" and see if the query is any faster for
you. If it is faster without nested loops, then you might need to look into
changing some of the planner's cost settings.
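The test above could be run in one session like this; the database name and query are placeholders, and the SET only affects the current session, so nothing needs to be undone server-wide:

```shell
# Sketch of the enable_nestloop test (mydb and the SELECT are placeholders).
psql -d mydb <<'SQL'
-- Disable nested-loop joins for this session only.
SET enable_nestloop = off;

-- Run the slow query and compare the plan and timing
-- against the nested-loop version.
EXPLAIN ANALYZE SELECT ...;

-- Restore the default for the rest of the session.
RESET enable_nestloop;
SQL
```

If the planner picks a hash or merge join here and it wins, that usually points at cost settings (e.g. random_page_cost) or stale statistics rather than at leaving enable_nestloop off permanently.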