On Wed, 2007-01-17 at 08:37, Jeremy Haile wrote:
> > I still keep wondering if this table is bloated with dead tuples. Even
> > if you vacuum often if there's a connection with an idle transaction,
> > the tuples can't be reclaimed and the table would continue to grow.
> I used to vacuum once an hour, although I've switched it to autovacuum
> now. It definitely could be bloated with dead tuples. I'll paste the
> "vacuum analyze verbose" output at the bottom of this e-mail. Would a
> vacuum full be a good idea?
> > Assuming the table's NOT bloated, you may do well to increase the
> > effective_cache_size, which doesn't allocate anything,
> > try setting it to something like 512MB or so.
> It's currently set to 1000MB.
Sounds about right for a machine with 2G memory (as you mentioned earlier).
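For reference, effective_cache_size is only a planner hint; it allocates no memory, it just tells the planner how much of the table/index data is likely to be cached by Postgres plus the OS. A minimal postgresql.conf sketch, assuming a 2 GB box where roughly half the RAM ends up in the OS page cache:

```
# postgresql.conf -- planner hint only, no memory is allocated by this setting
effective_cache_size = 1000MB   # roughly RAM minus shared_buffers and OS overhead
```

Raising it makes the planner more willing to choose index scans, since it assumes index pages are more likely to already be in cache.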
> > If your table is bloating, and you don't have idle transactions hanging
> > off the database, it could be that your fsm settings are too low.
> fsm is currently set to 2000000. Is there any harm in setting it too
> high? =)
As long as you don't run out of memory, it can be pretty high. Note
that the free space map lives in shared memory, so setting it too high can
cause the db to fail to start.
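To make the shared-memory cost concrete, here is a sketch of the relevant 8.2 settings (the ~6 bytes/slot figure is from the 8.2 documentation; the values shown are just the ones discussed above):

```
# postgresql.conf -- free space map settings (PostgreSQL 8.2)
# Both are allocated in shared memory at startup, so oversizing them
# can make the postmaster fail to start if the kernel's SHMMAX is too low.
max_fsm_pages = 2000000      # ~6 bytes of shared memory per page slot (~12 MB here)
max_fsm_relations = 1000     # one slot per table/index tracked
```

The VACUUM VERBOSE output ends with a summary line telling you whether the current max_fsm_pages is large enough to track all the free space it found.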
> INFO: scanned index "transaction_facts_pkey" to remove 759969 row versions
> DETAIL: CPU 7.20s/2.31u sec elapsed 315.31 sec.
> INFO: scanned index "transaction_facts_dcms_dim_id_idx" to remove
> 759969 row versions
> DETAIL: 759969 index row versions were removed.
> 0 index pages have been deleted, 0 are currently reusable.
> CPU 0.00s/0.00u sec elapsed 0.00 sec.
> INFO: "transaction_facts": found 759969 removable, 2274280 nonremovable
> row versions in 308142 pages
> DETAIL: 0 dead row versions cannot be removed yet.
> There were 15710471 unused item pointers.
> 266986 pages contain useful free space.
> 0 pages are entirely empty.
> CPU 58.00s/35.59u sec elapsed 3240.94 sec.
That's about 32% dead rows. Might be worth scheduling a vacuum full,
but it's not as badly bloated as I was afraid it might be. It looks to me
like you could probably use a faster I/O subsystem in that machine, though.
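If you do schedule one, a possible one-off cleanup looks like this (a sketch, assuming a maintenance window: VACUUM FULL takes an exclusive lock on the table and can run a long time on a ~300k-page table):

```sql
-- Compact the table; exclusive lock, so run during a quiet period.
VACUUM FULL VERBOSE transaction_facts;

-- In 8.2, VACUUM FULL tends to bloat indexes while moving rows,
-- so rebuild them afterwards and refresh planner statistics:
REINDEX TABLE transaction_facts;
ANALYZE transaction_facts;
```

After that, regular (or auto) vacuum should be able to keep up, provided no long-idle transactions pin old tuples.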
If lowering the random page cost fixes your issues, then I'd just run
with it lower for now. Note that while lowering it may fix one query,
it may break another. Tuning pgsql, like any database, is as much art
as science.
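One low-risk way to experiment: random_page_cost can be changed per session before committing it to postgresql.conf, so one query's fix doesn't silently regress others. A sketch (the SELECT is a placeholder for your own query):

```sql
SET random_page_cost = 2.0;       -- default is 4.0; lower favors index scans
EXPLAIN ANALYZE SELECT ...;       -- re-check the plan and actual runtime
RESET random_page_cost;           -- back to the server-wide setting
```

Compare the EXPLAIN ANALYZE output before and after; only put the new value in postgresql.conf once you've spot-checked your other important queries.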