Re: slow vacuum performance

From: pginfo <pginfo(at)t1(dot)unisoftbg(dot)com>
To: "scott(dot)marlowe" <scott(dot)marlowe(at)ihs(dot)com>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: slow vacuum performance
Date: 2004-03-24 16:49:55
Message-ID: 4061BC33.9CC1E712@t1.unisoftbg.com
Lists: pgsql-performance

Hi,

scott.marlowe wrote:

> On Wed, 24 Mar 2004, pginfo wrote:
>
> > Hi,
> >
> > I am running pg 7.4.1 on a Linux box.
> > I have a middle-sized DB with many updates, and afterwards I try to run
> > vacuum full analyze.
>
> Is there a reason to not use just regular vacuum / analyze (i.e. NOT
> full)?
>

Yes. When I make massive updates (only in my case, of course), for example
2M rows, I do not expect to have 2M new rows in the next 180 days. That is
the reason for running vacuum full.
My idea was to free the unneeded space and so have a faster system.
It is possible that I am wrong.
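
For example, after one of the mass updates I run something like this (the
table name here is only an illustration):

    -- plain vacuum: marks the dead rows as reusable, does not lock out
    -- readers or writers for the whole run
    VACUUM ANALYZE orders;

    -- vacuum full: also compacts the table and gives space back to the
    -- OS, but it takes an exclusive lock while it runs
    VACUUM FULL ANALYZE orders;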

> > It takes about 2 h.
>
> Full vacuums, by their nature, tend to be a bit slow. It's better to let
> the database achieve a kind of "steady state" with regards to number of
> dead tuples, and use regular vacuums to reclaim said space rather than a
> full vacuum.
>
> > How can I improve the vacuum full analyze time?
> >
> > My configuration:
> >
> > shared_buffers = 15000   # min 16, at least max_connections*2, 8KB each
> > sort_mem = 10000         # min 64, size in KB
>
> You might want to look at dropping sort_mem. It would appear you've been
> going through the postgresql.conf file and bumping up numbers to see what
> works and what doesn't. While most of the settings aren't too dangerous
> to crank up a little high, sort_mem is quite dangerous to crank up high,
> should you have a lot of people connected who are all sorting. Note that
> sort_mem is a limit PER SORT, not per backend, or per database, or per
> user, or even per table, but per sort. If a query needs to run three or
> four sorts, it can use 3 or 4x sort_mem. If a hundred users do this at
> once, they can then use 300 or 400x sort_mem. You can see where I'm
> heading.
>
> Note that for individual sorts in batch files, like import processes, you
> can bump up sort_mem with the set command, so you don't have to have a
> large setting in postgresql.conf to use a lot of sort mem when you need
> to, you can just grab it during that one session.
>
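
(To put numbers on that: with sort_mem = 10000 KB, one sort can take
roughly 10 MB; 100 users each running a query with four sorts could need
about 100 x 4 x 10 MB, i.e. around 4 GB.)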

I know. In my case we are using many IDs declared as varchar/name (I know
it is a bad idea, but we are migrating this system from Oracle), and pg has
very bad performance with varchar/name indexes.
The only solution I found was to increase sort_mem.
But I will try to decrease it and see the result.
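
For the batch imports I will also try the per-session setting, something
like this (the value is only an example):

    SET sort_mem = 100000;   -- in KB, so about 100 MB, this session only
    -- ... run the big import / index build here ...
    RESET sort_mem;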

> > vacuum_mem = 32000 # min 1024, size in KB
>
> If you've got lots of memory, crank up vacuum_mem to the 200 to 500 meg
> range and see what happens.
>

I will try it today. It is a good idea and I hope it will help.
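
For a first test I will raise it per session (assuming it can be set that
way too; the value is only an example):

    SET vacuum_mem = 250000;   -- in KB, about 250 MB
    VACUUM ANALYZE;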

> For a good tuning guide, go here:
>
> http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html

I know it. It is the best guide I have found, and the site in general is
very good.

Thanks for the help.
ivan.
