Re: Performance Anomalies in 7.4.5

From: "Alban Medici (NetCentrex)" <amedici(at)fr(dot)netcentrex(dot)net>
To: "'Tom Lane'" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "'Thomas F(dot)O'Connell'" <tfo(at)sitening(dot)com>
Cc: "'PgSQL - Performance'" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Performance Anomalies in 7.4.5
Date: 2004-10-28 08:01:02
Message-ID: 20041028080109.E47A83A4828@svr1.postgresql.org
Lists: pgsql-performance

This feature, probably available in 8.x, will be very useful for people just
using PostgreSQL as a "normal" database user.

-----Original Message-----
From: pgsql-performance-owner(at)postgresql(dot)org
[mailto:pgsql-performance-owner(at)postgresql(dot)org] On Behalf Of Tom Lane
Sent: Thursday, 21 October 2004 23:53
To: Thomas F.O'Connell
Cc: PgSQL - Performance
Subject: Re: [PERFORM] Performance Anomalies in 7.4.5

"Thomas F.O'Connell" <tfo(at)sitening(dot)com> writes:
> ->  Nested Loop  (cost=0.00..0.01 rows=1 width=8) (actual time=1.771..298305.531 rows=2452 loops=1)
>       Join Filter: ("inner".id = "outer".id)
>       ->  Seq Scan on userdata u  (cost=0.00..0.00 rows=1 width=8) (actual time=0.026..11.869 rows=2452 loops=1)
>       ->  Seq Scan on userdata_history h  (cost=0.00..0.00 rows=1 width=8) (actual time=0.005..70.519 rows=41631 loops=2452)
>             Filter: (id = 18181::bigint)
> Total runtime: 298321.926 ms
> (7 rows)

What's killing you here is that the planner thinks these tables are
completely empty (notice the zero cost estimates, which implies the table
has zero blocks --- the fact that the rows estimate is 1 and not 0 is the
result of sanity-check clamping inside costsize.c). This leads it to choose
a nestloop, which would be the best plan if there were only a few rows
involved, but it degenerates rapidly when there are not.
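
A quick way to confirm that diagnosis is to look at the stored statistics
directly; for example (a sketch, using the table names from the plan above):

    SELECT relname, relpages, reltuples
      FROM pg_class
     WHERE relname IN ('userdata', 'userdata_history');
    -- relpages = 0 and reltuples = 0 indicate the tables were last
    -- analyzed (or created) while they were still empty.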

It's easy to fall into this trap when truncating and reloading tables; all
you need is an "analyze" while the table is empty. The rule of thumb is to
analyze just after you reload the table, not just before.
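
In SQL terms, the recommended order is roughly this (the table name and file
path are just placeholders):

    TRUNCATE TABLE userdata;
    COPY userdata FROM '/path/to/userdata.dat';
    ANALYZE userdata;   -- analyze AFTER the reload, not before, so the
                        -- stored statistics describe the full table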

I'm getting more and more convinced that we need to drop the reltuples and
relpages entries in pg_class, in favor of checking the physical table size
whenever we make a plan. We could derive the tuple count estimate by having
ANALYZE store a tuples-per-page estimate in pg_class and then multiply by
the current table size; tuples-per-page should be a much more stable figure
than total tuple count.
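
Roughly speaking, the estimate would be computed like this (a sketch of the
arithmetic only, not actual planner code; under this proposal the per-page
density would be a value ANALYZE stores, rather than the division shown):

    SELECT relname,
           reltuples / NULLIF(relpages, 0) AS tuples_per_page
      FROM pg_class
     WHERE relname = 'userdata';
    -- The planner would multiply that per-page density by the table's
    -- current size in blocks, obtained from the physical file at plan
    -- time, instead of trusting reltuples as stored at the last ANALYZE.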

One drawback to this is that it would require an additional lseek per table
while planning, but that doesn't seem like a huge penalty.

Probably the most severe objection to doing things this way is that the
selected plan could change unexpectedly as a result of the physical table
size changing. Right now the DBA can keep tight rein on actions that might
affect plan selection (ie, VACUUM and ANALYZE), but that would go by the
board with this. OTOH, we seem to be moving towards autovacuum, which also
takes away any guarantees in this department.

In any case this is speculation for 8.1; I think it's too late for 8.0.

regards, tom lane

