Thanks for the feedback, guys.
The database will grow in size. This first client's
year's worth of data was 85MB (a test / proof of
concept). I expect the '05 datasets to be much larger.
I think I may increase work_mem and
maintenance_work_mem a bit more, as suggested.
I'm still a bit confused about max_connections.
I've been keeping max_connections equal to the number
of Apache connections, since this is all currently on
one box and it's a web-based application, and I wanted
it to stick with the same number of connections.
However, is there a formula or way to determine
whether the current setup (memory, etc.) can support
that many connections? Exactly how is max_connections
determined?
Again, thanks for your help, and Mr. Taylor's too.
I look forward to providing help once I've got more of
a grasp on things! :)
--- John A Meinel <john(at)arbash-meinel(dot)com> wrote:
> Puddle wrote:
> >Hello, I'm a Sun Solaris sys admin for a start-up
> >company. I've got the UNIX background, but now I'm
> >having to learn PostgreSQL to support it on our
> >servers :)
> >Server Background:
> >Solaris 10 x86
> >PostgreSQL 8.0.3
> >Dell PowerEdge 2650 w/4gb ram.
> >This is running JBoss/Apache as well (I KNOW the
> >juju of running it all on one box, but it's all we
> >have currently for this project). I'm dedicating
> >1GB for PostgreSQL alone.
> >So far I LOVE it; compared to MySQL it's solid.
> >The only things I'm kind of confused about (I've
> >been searching a lot of good performance guides,
> >but they're not too clear to me) are the following:
> >1.) shared_buffers: I see a lot of references to
> >making this the size of available RAM (for the DB).
> >I also read to make it the size of pgdata.
> >I notice when I load postgres each daemon is using
> >that amount of shared memory (shared_buffers). Our
> >dataset (pgdata) is 85MB in size. So, I'm curious:
> >should this size reflect pgdata or the 'actual'
> >memory given?
> >I currently have this at 128MB.
> You generally want shared_buffers to be no more
> than 10% of available RAM. Postgres expects the OS
> to do its own caching. 128M/4G = 3% seems
> reasonable to me. I would certainly never set it to
> 100% of RAM.
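A minimal postgresql.conf sketch of the advice above. One caveat worth knowing on 8.0: shared_buffers is expressed in 8kB buffer pages, not bytes (memory-unit strings like '128MB' came in later releases), so the value below is the page count for 128MB:

```
# PostgreSQL 8.0: shared_buffers is counted in 8kB pages.
# 16384 pages * 8kB = 128MB, about 3% of this box's 4GB of RAM.
shared_buffers = 16384
```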
> >2.) effective_cache_size: from what I read, this
> >is the 'total' allowed memory for postgresql to
> >use, correct? So, if I am willing to allow 1GB of
> >memory, should I make this 1GB?
> This is the effective amount of caching between the
> actual postgres buffers and the OS buffers. If you
> are dedicating this machine to postgres, I would
> set it to something like 3.5G. If it is a mixed
> machine, then you have to think about it.
> This does not change how postgres uses RAM; it
> changes how postgres estimates whether an index
> scan will be cheaper than a sequential scan, based
> on the likelihood that the data you want will
> already be cached in RAM.
> If your dataset is only 85MB, and you don't think
> it will grow, you really don't have to worry about
> this much. You have a very small database.
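A sketch of how that suggestion might look in postgresql.conf. As with shared_buffers, on 8.0 this setting is measured in 8kB disk pages, so the ~3.5GB figure above becomes a page count:

```
# Planner hint only: no memory is actually allocated for this.
# In 8.0 it is measured in 8kB pages: 458752 pages * 8kB = 3.5GB.
effective_cache_size = 458752
```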
> >3.) max_connections: I've been trying to figure
> >out 'how' to determine this #. I've read that some
> >amount of memory is used per connection, i.e.
> >128mb (buffer) + 500kb = 128.5mb per connection.
> Max connections is just how many concurrent
> connections you want to allow. If you can get away
> with lower, do so. Mostly this is to prevent
> connections * work_mem from getting bigger than
> your real working memory and causing you to swap.
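The reasoning above can be sketched as back-of-the-envelope arithmetic. The numbers come from this thread (4MB work_mem, 128MB shared_buffers), and the 500KB per-backend overhead is the poster's figure, not a measured one; real usage varies (a query can use work_mem more than once).

```python
# Rough worst-case memory budget: every backend sorting at once.
# Figures are illustrative, taken from this thread.

def worst_case_mb(max_connections, work_mem_mb, shared_buffers_mb,
                  per_backend_overhead_mb=0.5):
    """Pessimistic peak: shared buffers plus, per connection,
    one work_mem allocation and some backend overhead."""
    return shared_buffers_mb + max_connections * (
        work_mem_mb + per_backend_overhead_mb)

# 100 Apache-matched connections, 4MB work_mem, 128MB shared_buffers:
peak = worst_case_mb(100, 4, 128)
print(peak)  # 578.0 -- comfortably inside 4GB, so no swap risk here
```

If that worst case approaches physical RAM, lower max_connections or work_mem rather than risk swapping.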
> >I was curious about 'sort_mem'; I can't find any
> >mention of it in the 8.0.3 documentation. Has it
> >been removed?
> sort_mem changed to work_mem in 8.0; same thing
> with vacuum_mem -> maintenance_work_mem.
> >work_mem and max_stack_depth set to 4096
> >maintenance_work_mem set to 64mb
> Depends how much space you want to give per
> connection. 4M is pretty small for a machine with
> 4G of RAM, but if your DB is only 85M it might be
> plenty.
> work_mem is how much memory a sort/hash/etc. will
> use before it spills to disk. So look at your
> queries. If you tend to sort most of your 85M db in
> a single query, you might want to make it a little
> bit more. But if all of your queries are very
> selective, 4M could be plenty.
> I would make maintenance_work_mem more like 512M. It
> is only used for
> CREATE INDEX, VACUUM, etc. Things that are not
> generally done by more
> than one process at a time. And it's nice for them
> to have plenty of
> room to run fast.
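Putting those two suggestions together as a postgresql.conf sketch (on 8.0 both settings are in kB, which is why the poster's "4096" means 4MB):

```
# Per-sort/hash memory, per operation, per backend: 4096kB = 4MB.
work_mem = 4096
# Used by CREATE INDEX, VACUUM, etc.; rarely more than one runs
# at a time, so it can be much larger: 524288kB = 512MB.
maintenance_work_mem = 524288
```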
> >Thanks for any help on this. I'm sure bombardment
> >from newbies gets old :)
> Good luck,