
Re: Performance With Joins on Large Tables

From: "Jim C(dot) Nasby" <jim(at)nasby(dot)net>
To: Joshua Marsh <icub3d(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance With Joins on Large Tables
Date: 2006-09-13 06:41:52
Message-ID: 20060913064152.GC23173@nasby.net
Lists: pgsql-performance
On Tue, Sep 12, 2006 at 04:17:34PM -0600, Joshua Marsh wrote:
> data=# EXPLAIN SELECT v.phonedirect, v.editdrop, EXTRACT (EPOCH FROM
> v.custdate), EXTRACT (YEAR FROM s.custdate) || '-' || EXTRACT (MONTH
> FROM s.custdate) FROM view_505 v INNER JOIN r3s169 s ON v.dsiacctno =
> s.dsiacctno;
>                                          QUERY PLAN
> -----------------------------------------------------------------------------------------------
> Merge Join  (cost=293767607.69..305744319.52 rows=285392608 width=11)
>   Merge Cond: ("outer".dsiacctno = "inner".dsiacctno)
>   ->  Sort  (cost=127304933.87..127585815.71 rows=112352736 width=20)
>         Sort Key: v.dsiacctno
>         ->  Seq Scan on view_505 v  (cost=100000000.00..104604059.36
> rows=112352736 width=20)
>   ->  Sort  (cost=166462673.82..167176155.34 rows=285392608 width=17)
>         Sort Key: s.dsiacctno
>         ->  Seq Scan on r3s169 s  (cost=100000000.00..106875334.08
> rows=285392608 width=17)
> (8 rows)
> 
> 
> Since enable_seqscan is off, my understanding is that in order for the
> query planner to use a sequential scan it must think there is no other
> alternative. Both sides are indexed and analyzed, so that confuses me a
> little.
> 
> I tried it on a smaller sample set of the data and it works fine:

Actually, enable_seqscan=off just adds a fixed overhead to the seqscan
cost estimate. That's why the cost for the seqscans in that plan starts
at 100000000. I've suggested changing that to a variable overhead based
on the expected rowcount, but the counter-argument was that anyone with
so much data that the fixed amount wouldn't work would most likely be
having bigger issues anyway.
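You can see the effect directly in a psql session: flipping the setting just shifts the seqscan's startup cost by a constant, it doesn't remove the plan from consideration. A rough sketch against your view_505 table (the "on" costs are illustrative, derived by subtracting the penalty from your plan's numbers):

```sql
SET enable_seqscan = on;
EXPLAIN SELECT * FROM view_505;
--  Seq Scan on view_505  (cost=0.00..4604059.36 rows=112352736 ...)

SET enable_seqscan = off;
EXPLAIN SELECT * FROM view_505;
--  Seq Scan on view_505  (cost=100000000.00..104604059.36 rows=112352736 ...)
```

So if the planner still picks the seqscan with the penalty applied, it genuinely estimates every index-based alternative as costing more than 100000000.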

Other things you can try to get the index scan back would be to reduce
random_page_cost and to analyze the join fields in those tables with a
higher statistics target (though I'm not 100% certain the join cost
estimator actually takes that into account). Or if you don't mind
patching your source code, it wouldn't be difficult to make
enable_seqscan use a bigger penalty value than 100000000.
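Concretely, the first two suggestions look like this (the values here are illustrative starting points, not recommendations):

```sql
-- Make random I/O look cheaper relative to sequential I/O, so index
-- scans win more often (the default is 4.0; try per-session first).
SET random_page_cost = 2.0;

-- Gather more detailed statistics on the join columns, then re-analyze
-- so the planner's row and cost estimates improve.
ALTER TABLE view_505 ALTER COLUMN dsiacctno SET STATISTICS 1000;
ALTER TABLE r3s169  ALTER COLUMN dsiacctno SET STATISTICS 1000;
ANALYZE view_505;
ANALYZE r3s169;
```

Setting random_page_cost per-session lets you verify the plan changes before committing the value to postgresql.conf.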
-- 
Jim Nasby                                            jim(at)nasby(dot)net
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)
