Re: My own performance/tuning q&a

From: Allen Landsidel <all(at)biosys(dot)net>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: My own performance/tuning q&a
Date: 2003-10-25 00:11:52
Message-ID: 6.0.0.22.0.20031024194443.024306b8@pop.hotpop.com
Lists: pgsql-performance

Pardon this if it looks somewhat "weird," but it seems I'm not getting all
the messages from the list. I've also noticed over the past few days that
a lot of them are arriving out of order.

So, this was copy/pasted from the web archive of the list.

Vivek Khera wrote:
> >>>>> "AL" == Allen Landsidel <all ( at ) biosys ( dot ) net> writes:
>
>AL> maxusers 256
> >> let the system autoconfigure maxusers...
>
>AL> Are you sure about this? I have always understood that explicitly
>
>Yes, recent freebsd kernels autosize various tables and limits based
>on existing RAM. It does pretty well.

I'll disable it then and see how it goes.
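
(If memory serves, "disabling" it means setting the value to zero rather
than deleting the line; in FreeBSD 4.5 and later a zero asks the kernel to
autosize. The version cutoff is my recollection, so treat it as an
assumption:

   maxusers 0    # 0 = size kernel tables automatically from available RAM
)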

>AL> These are related to something else.. a linux developer on the system
>AL> used to the way it'll always allow you access to all the memory on a
>
>Ahhh... I guess we don't believe in multi-user systems ;-)

No, that's a foreign concept to a lot of people, it seems. As a matter of
trivia, I first suggested we run this on another server instead and hit the
db remotely, since it's only going to be a "run once" type of thing that
converts the old system to the new one, but I was rebuffed. Yesterday
during a test run the thing went over the 1GB limit, failed on some new()
or other, and dumped core. I couldn't bring the db down at that time to
update the kernel, so we ran it on another box that has MAXDSIZ set to
1.5GB and it ran OK, but took about six hours. So I'll be upping that
value yet again for this one special run this weekend when we do the
*real* switchover, then putting it back down once we're all done.
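
For anyone following along, the limit in question is a kernel config
option. The line below is the FreeBSD 4.x syntax, with the 1.5GB value
matching the box mentioned above:

   options MAXDSIZ="(1536UL*1024*1024)"   # 1.5GB max data segment per process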

I can deal with it since it's not going to be "normal" but simply a one-off
type thing.

FWIW, the same kind of thing has happened to me with this postgres
install: occasionally a large atomic query like DELETE will fail for the
same reason (out of memory) if there are a lot of rows to be removed, and
TRUNCATE isn't an option since there are FKs on the table in question.
This is only a minor annoyance, but I'd be interested to hear how other
people work around it.
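
One workaround that comes to mind is deleting in batches, so the memory
held for the pending FK trigger checks stays bounded; a rough sketch, with
hypothetical table and column names:

   -- old_records/id/expired are made-up names; repeat until 0 rows deleted
   DELETE FROM old_records
    WHERE id IN (SELECT id FROM old_records
                  WHERE expired = true
                  LIMIT 10000);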

>I use 262144 for SHMMAXPGS and SHMALL. I also use about 30000 shared
>buffers.

I believe I had it fairly high once before and didn't notice much of an
improvement, but I'll experiment with values around what you suggest.
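
For my own notes, those numbers translate to something like this (FreeBSD
kernel config plus postgresql.conf; kernel pages are 4KB, PG buffers 8KB):

   options SHMMAXPGS=262144   # 262144 * 4KB pages = 1GB shared memory ceiling
   options SHMALL=262144

   shared_buffers = 30000     # postgresql.conf: 30000 * 8KB = ~234MB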

>AL> I'll see if sysctl lets me write this value, or if it's a kernel
>AL> config option I missed, unless you have remembered between then and
>
>you need to bump some header file constant and rebuild the kernel. it
>also increases the granularity of how the buffer cache is used, so I'm
>not sure how it affects overall system. nothing like an experiment...

So far I've found a whole lot of questions about this, but nothing about
the constant. The sysctl (vfs.hibufspace, I believe) is read-only,
although I should be able to work around that via /boot/loader.conf if I
can't find the kernel option.
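
The related rule of thumb I've seen posted is to derive
effective_cache_size from that same sysctl, since vfs.hibufspace is in
bytes and PG counts in 8KB blocks:

   # effective_cache_size = `sysctl -n vfs.hibufspace` / 8192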

>AL> Given this and the above about the controllers onboard cache (not to
>AL> mention the per-drive cache) do you think I'll still need to lower
>AL> effective_cache_size?
>
>It is hard to say. If you tell PG you have more than you do, I don't
>know what kind of decisions it will make incorrectly. I'd rather be
>conservative and limit it to the RAM that the system says it will
>use. The RAM in the controller is not additive to this -- it is
>redundant to it, since all data goes thru that cache into the main
>memory.

A very good point; I don't know why I thought they might hold different
data. The contents could differ somewhat, but most everything in the
controller cache will probably be duplicated in the OS cache, provided the
OS cache is at least as large.

A separate reply, concatenated here, to a message I actually did receive
via email:

At 16:50 10/24/2003, Bruce Momjian wrote:
>Vivek Khera wrote:
> > >>>>> "sm" == scott marlowe <scott.marlowe> writes:
> >
> >
> > sm> Note that Tom has mentioned problems with possible deadlocks when
> > sm> nicing individual backends before, so proceed with caution here.
> >
> > I can see possible starvation, but if scheduling changes cause
> > deadlocks, then there's something wrong with the design.
>
>Yes, I think Tom's concern was priority inversion, where a low priority
>process holds a lock while a higher one waits for it.

1. Vivek, you were absolutely right about the backend process not being
lowered in priority by nice'ing the psql. Yet another thing that "just
didn't occur" to me when I wrote the script.

2. Vivek and Bruce (and even Tom): "VACUUM ANALYZE (VERBOSE)" isn't
supposed to lock anything, though, right? I can see this being a possible
problem for other queries that do take locks, but between Vivek pointing
out that the nice isn't *really* affecting the vacuum (as I just run one
query db-wide) and the fact that the vacuum doesn't lock, I don't think
it's hurting (or helping) in this case.

However, I do the same thing with the reindex, so I'll definitely be
taking it out there, as that one does lock.. although I would think the
worst this would do would be making the index unavailable and forcing a
seq scan. Is that not the case?
