From: "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>
To: "Jessica Richard" <rjessil(at)yahoo(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: how to estimate shared_buffers...
Date: 2008-07-12 13:05:59
Message-ID: dcc563d10807120605j7dbfe12ek4dc6b7cfdfdee0a5@mail.gmail.com
Lists: pgsql-performance
On Sat, Jul 12, 2008 at 5:30 AM, Jessica Richard <rjessil(at)yahoo(dot)com> wrote:
> On a running production machine, we have 900M configured on a 16G-memory
> Linux host. The db size for all dbs combined is about 50G. There are many
> transactions going on all the time (deletes, inserts, updates). We do not
> have a testing environment that has the same setup and the same amount of
> workload. I want to evaluate on the production host whether this 900M is enough.
> If not, we still have room to go up a little bit to speed up all Postgres
> activities. I don't know enough about the SA side. I just would imagine, if
> something like "top" command or other tools can measure how much total
> memory Postgres is actually using (against the configured 900M shared
> buffers), and if Postgres is using almost 900M all the time, I would take
> this as an indication that the shared_buffers can go up for another 100M...
>
> What is the best way to tell how much memory Postgres (all Postgres related
> things) is actually using?
If you've got a 50G data set, then PostgreSQL is most likely using
all the memory you give it for shared buffers. top should show that
easily.
I'd say start at 25% of RAM, ~4G (this is a 64-bit machine, right?). That
leaves plenty of memory for the OS to cache data, and for PostgreSQL
to allocate work_mem-type memory from.
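The 25%-of-RAM starting point above can be sketched as a quick shell calculation (a sketch, assuming a Linux host where total RAM is readable from /proc/meminfo; the variable names are just for illustration):

```shell
# Compute a 25%-of-RAM starting point for shared_buffers (Linux).
# On the 16G host in this thread that works out to roughly 4G.
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
shared_buffers_mb=$(( total_kb / 4 / 1024 ))
echo "suggested: shared_buffers = ${shared_buffers_mb}MB"
```

Note that raising shared_buffers from 900M to several GB requires a server restart, and on PostgreSQL versions of this era (which use System V shared memory) you may also need to raise the kernel's SHMMAX/SHMALL limits first.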