Re: speeding up a query on a large table

From: Mike Rylander <mrylander(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: speeding up a query on a large table
Date: 2005-08-17 21:55:51
Message-ID: b918cf3d05081714557259d7eb@mail.gmail.com
Lists: pgsql-general

On 8/17/05, Manfred Koizar <mkoi-pg(at)aon(dot)at> wrote:
> On Mon, 25 Jul 2005 17:50:55 -0400, Kevin Murphy
> <murphy(at)genome(dot)chop(dot)edu> wrote:
> > and because the number of possible search terms is so large, it
> > would be nice if the entire index could somehow be preloaded into memory
> > and encouraged to stay there.
>
> Postgres does not have such a feature, and I wouldn't recommend messing
> around inside Postgres. You could try copying the relevant index
> file(s) to /dev/null to populate the OS cache ...

That actually works fine. When I had big problems with a large GiST
index, I just used cat to dump it to /dev/null and the OS grabbed it.
Of course, that was on Linux, so YMMV.
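
For anyone who wants to try it, here is a minimal sketch of that
approach (the index name 'my_big_index', the <database_oid> and
<relfilenode> values, and the $PGDATA path are placeholders; adjust
for your own setup):

    -- in psql: find the file(s) backing the index on disk
    SELECT oid FROM pg_database WHERE datname = current_database();
    SELECT relfilenode FROM pg_class WHERE relname = 'my_big_index';

    # from the shell, as a user who can read the data directory;
    # relations over 1GB are split into segments (.1, .2, ...),
    # hence the trailing glob
    cat $PGDATA/base/<database_oid>/<relfilenode>* > /dev/null

Keep in mind the pages only stay in the OS cache until something else
pushes them out, so you may need to repeat this after heavy I/O.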

--
Mike Rylander
mrylander(at)gmail(dot)com
GPLS -- PINES Development
Database Developer
http://open-ils.org
