Re: : Tracking Full Table Scans

From: Craig Ringer <ringerc(at)ringerc(dot)id(dot)au>
To: Venkat Balaji <venkat(dot)balaji(at)verse(dot)in>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, PGSQL Performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: : Tracking Full Table Scans
Date: 2011-09-28 00:55:12
Message-ID: 4E827070.4040801@ringerc.id.au
Lists: pgsql-performance

On 09/28/2011 12:26 AM, Venkat Balaji wrote:
> Thanks a lot Kevin !!
>
> Yes. I intended to track full table scans first, to ensure that only
> small tables, or tables with very few pages, are (as you said) being
> scanned in full.

A full table scan can also be the best plan for some queries, even on a
big table. If the query needs to touch all the data in a table - for
example, for an aggregate over every row - then it will often complete
fastest, and with the least disk I/O, using a sequential scan.
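As a quick illustration (using a hypothetical "orders" table), an
aggregate over the whole table will normally plan as a seqscan, and
that's the right choice:

  EXPLAIN SELECT sum(amount) FROM orders;
  -- Expected plan shape (details will vary):
  --   Aggregate
  --     ->  Seq Scan on orders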

I guess what you'd really want to know is which queries do seqscans
that match only a small fraction of the tuples scanned, i.e.
low-selectivity seqscans. I'm not sure it's possible to gather that
with PostgreSQL's current level of stats detail.
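The closest approximation I can think of is the per-table counters in
pg_stat_user_tables - a rough sketch, not a per-query answer:

  SELECT relname, seq_scan, seq_tup_read,
         seq_tup_read / NULLIF(seq_scan, 0) AS avg_tuples_per_scan
  FROM pg_stat_user_tables
  WHERE seq_scan > 0
  ORDER BY seq_tup_read DESC;

That tells you which tables get seqscanned and how many tuples those
scans read in total, but not how many tuples each query actually
needed - which is the selectivity question.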

--
Craig Ringer
