
Re: Performance query about large tables, lots of concurrent access

From: Karl Wright <kwright(at)metacarta(dot)com>
To: Shaun Thomas <sthomas(at)leapfrogonline(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance query about large tables, lots of concurrent access
Date: 2007-06-20 21:29:41
Message-ID: 46799C45.7080805@metacarta.com
Lists: pgsql-performance
Shaun Thomas wrote:
> On Wednesday 20 June 2007 12:55:20 pm Karl Wright wrote:
> 
> 
>>I am afraid that I did answer this.  My largest tables
>>are the ones continually being updated.  The smaller
>>ones are updated only infrequently. 
> 
> 
> You know, it actually sounds like you're getting whacked by the same 
> problem that got us a while back.  It sounds like you weren't vacuuming 
> frequently enough initially, and then tried vacuuming later, only after 
> you noticed performance degrading.
> 
> Unfortunately, what that means is that for several weeks or months, 
> Postgres has not been reusing rows on your (admittedly) active and large 
> tables; it just appends at the end, and lets old rows slowly bloat the 
> table larger and larger.  Indexes, too, will suffer from dead pages.  As 
> frightening/sickening as this sounds, you may need to dump/restore the 
> really huge table, or VACUUM FULL it to put it on a crash diet, and then 
> maintain a strict daily or twice-daily vacuum schedule to keep it under 
> control.
> 
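A minimal sketch of the recovery path described above, assuming a
hypothetical bloated table named big_table (neither the name nor the
exact commands are from the original messages):

    -- VACUUM FULL compacts the table in place, but it holds an exclusive
    -- lock for the duration, so run it in a maintenance window.
    VACUUM FULL VERBOSE big_table;

    -- Indexes accumulate dead pages too; rebuild them after compaction.
    REINDEX TABLE big_table;

    -- From then on, a plain (non-FULL) VACUUM ANALYZE run daily keeps
    -- freed space reusable so the table stops growing.
    VACUUM ANALYZE big_table;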

A nice try, but I had just completed a VACUUM on this database three 
hours prior to starting the VACUUM that I gave up on after 27 hours.  So 
I don't see how much more frequently I could do it.  (The one I did 
earlier finished in six hours - but to accomplish that I had to shut 
down EVERYTHING else that machine was doing.)

Karl
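
For reference, 8.x-era releases offer cost-based vacuum delay, which
throttles VACUUM so it can coexist with a live workload instead of
requiring everything else on the machine to be shut down; the trade-off
is an even longer runtime.  A minimal sketch, with illustrative values
rather than settings taken from this thread:

    -- Throttle this session's VACUUM so it shares I/O with foreground work.
    SET vacuum_cost_delay = 20;   -- sleep 20 ms each time the cost limit is hit
    SET vacuum_cost_limit = 200;  -- accumulated page cost that triggers the sleep
    VACUUM VERBOSE;               -- runs database-wide, slowly but unobtrusively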


> The reason I think this: even with several 200M row tables, vacuums 
> shouldn't take over 24 hours.  Ever.  Do a vacuum verbose and see just 
> how many pages it's trying to reclaim.  I'm willing to wager it's 
> several orders of magnitude higher than the max_fsm_pages setting 
> you've stuck in your config file.
> 
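A sketch of that check, assuming an 8.x-era server (max_fsm_pages and
max_fsm_relations are real settings of that era, later removed in 8.4):

    -- The tail of a database-wide verbose vacuum reports total free space
    -- map usage and warns if the map is too small to track all dead pages.
    VACUUM VERBOSE;

    -- Compare the reported page count against the configured ceiling:
    SHOW max_fsm_pages;
    SHOW max_fsm_relations;

    -- If the ceiling is too low, raise max_fsm_pages in postgresql.conf
    -- (illustrative value below) and restart the server:
    --   max_fsm_pages = 2000000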
> You'll also want to see which rows in your 250M+ table are actually 
> active, and shunt the stable rows to another (warehouse) table, perhaps 
> made available only via a view or table partition.  I mean, your most 
> active table is also the largest?  Seems a bit backward to me.
> 
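A minimal sketch of that active/warehouse split, using the
inheritance-based partitioning available in 8.x releases (all names and
the date boundary are hypothetical):

    -- Parent table that queries and views go through.
    CREATE TABLE events (
        id      bigint,
        created timestamptz,
        payload text
    );

    -- Small, hot partition for rows still being updated; fast to vacuum.
    CREATE TABLE events_active (
        CHECK (created >= DATE '2007-01-01')
    ) INHERITS (events);

    -- Large, stable warehouse partition; needs only occasional vacuums.
    CREATE TABLE events_archive (
        CHECK (created < DATE '2007-01-01')
    ) INHERITS (events);

    -- With constraint exclusion on, the planner skips any partition whose
    -- CHECK constraint rules it out of the query.
    SET constraint_exclusion = on;
    SELECT * FROM events WHERE created >= DATE '2007-06-01';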


