Re: Help tuning a large table off disk and into RAM

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Bill Moran <wmoran(at)potentialtech(dot)com>
Cc: "James Williams" <james(dot)wlms(at)googlemail(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: Help tuning a large table off disk and into RAM
Date: 2007-09-26 16:20:52
Message-ID: 22540.1190823652@sss.pgh.pa.us
Lists: pgsql-general

Bill Moran <wmoran(at)potentialtech(dot)com> writes:
> Give it enough shared_buffers and it will do that. You're estimating
> the size of your table @ 3G (try a pg_relation_size() on it to get an
> actual size). If you really want to get _all_ of it in all the time,
> you're probably going to need to add RAM to the machine.
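Bill's suggested check can be run directly in psql; pg_relation_size() reports the on-disk size of the table itself in bytes, and pg_size_pretty() formats it readably (both functions are available in PostgreSQL 8.1 and later). A minimal sketch, with my_big_table standing in for the actual table name:

```sql
-- Actual on-disk size of the heap alone (indexes not included).
-- 'my_big_table' is a placeholder for the table in question.
SELECT pg_size_pretty(pg_relation_size('my_big_table'));
```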

The table alone will barely fit in RAM, and he says he's got a boatload
of indexes too; and apparently Postgres isn't the only thing running on
the machine. He *definitely* has to buy more RAM if he wants it all
to fit. I wouldn't necessarily advise going to gigs of shared buffers;
you'll be putting a lot of temptation on the kernel to swap parts of
that out, and it does not sound at all like the workload will keep all
of the buffers "hot" enough to prevent that.
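Since the indexes matter here as much as the table, pg_total_relation_size() -- which counts the table plus its indexes and TOAST data -- gives a better picture of how much really has to fit in RAM. Again a sketch, with my_big_table as a placeholder:

```sql
-- Table plus all indexes and TOAST data: the figure that actually
-- has to fit in memory if everything is to stay cached.
SELECT pg_size_pretty(pg_total_relation_size('my_big_table'));
```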

regards, tom lane
