"Ismail Kizir" <ikizir(at)tumgazeteler(dot)com> writes:
> I have a database of 20 tables, ~1GB total size. My biggest table contains
> ~270,000 newspaper articles from Turkish journals. I am currently working on
> a "fulltext search" program of my own.
How much RAM does the machine have? If you have already executed the query once
and are repeating it, it's likely the entire data set is cached in RAM. That
won't remain true in the long term as your data set grows.
The default cost numbers are appropriate for a database where the data being
fetched cannot all fit in RAM and isn't already cached. There are also
scenarios where the algorithms the optimizer uses to estimate costs don't
capture everything, but tweaking the parameters to correct for those problems
would cause other queries to be handled even worse.
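For anyone who wants to experiment anyway, such tweaks can be tried
per-session before touching postgresql.conf. This is only a sketch; the table
and WHERE clause are placeholders, not anything from the original post:

```sql
-- Hypothetical session-level experiment; "articles" is a placeholder table.
SET random_page_cost = 2.0;   -- default is 4.0; lower favors index scans
EXPLAIN ANALYZE SELECT * FROM articles WHERE journal_id = 42;
RESET random_page_cost;       -- revert so later queries use the default
```

Comparing the EXPLAIN ANALYZE output before and after the SET shows whether
the plan (and its actual runtime) changes, without affecting other sessions.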
If anything the penalty for random disk accesses has increased over the years.
My desktop is about 100 times faster than my 486 router. But the hard drive in
the 486 is only about 10x slower than the hard drive in the desktop. And the
ratio of seek times is probably even less.
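The arithmetic behind that claim, using the post's own rough numbers
(illustrative estimates, not measurements):

```python
# Rough estimates from the text above, not measurements.
cpu_speedup = 100    # desktop CPU vs. the 486's CPU
disk_speedup = 10    # desktop disk vs. the 486's disk

# How much one disk access costs now in "CPU-equivalents", relative to then:
relative_io_penalty = cpu_speedup / disk_speedup
print(relative_io_penalty)   # 10.0: random I/O is ~10x pricier in CPU terms
```

In other words, even though both components got faster, the gap between them
widened, so a random page fetch wastes more potential CPU work than it used to.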
There is a parameter, effective_cache_size, which is supposed to help Postgres
take into account the likelihood that the data will already be in cache. I'm
not sure exactly how it affects planning, but perhaps this parameter needs to
have much more impact on the resulting plans, at least for databases that are
small relative to it.
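As a sketch of what that would mean in practice (the value is illustrative,
not a recommendation):

```sql
-- Illustrative only: tell the planner roughly how much of the data set the
-- OS cache can hold.  With ~1GB of data on a machine with enough free RAM,
-- the whole database could plausibly be cached.
SET effective_cache_size = 131072;  -- in 8kB pages, i.e. ~1GB
```

The planner uses this as a hint about cache availability when costing index
scans; it doesn't allocate any memory itself.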