Re: Getting rid of a seq scan in query on a large table

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Jens Hoffrichter" <jens(dot)hoffrichter(at)gmail(dot)com>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Getting rid of a seq scan in query on a large table
Date: 2011-06-27 14:30:23
Message-ID: 4E084DAF020000250003EC54@gw.wicourts.gov
Lists: pgsql-performance

Jens Hoffrichter <jens(dot)hoffrichter(at)gmail(dot)com> wrote:

> I'm having trouble getting rid of a sequential scan on a table
> with roughly 120k entries in it.

Please post your configuration information and some information
about your hardware and OS.

http://wiki.postgresql.org/wiki/SlowQueryQuestions

Since the table scan went through about 120,000 rows in 60 ms, it
is clear that your data is heavily cached, so random_page_cost
should be close to or equal to seq_page_cost, and both should
probably be somewhere around 0.1 to 0.5. You should set
effective_cache_size to the sum of shared_buffers and whatever
your OS cache is. I have sometimes found that I get faster plans
by increasing cpu_tuple_cost.
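
For example, you could experiment at the session level in psql
before touching postgresql.conf; the values below are only
illustrative, and the effective_cache_size figure assumes your
shared_buffers plus OS cache comes to about 6GB:

  SET seq_page_cost = 0.1;
  SET random_page_cost = 0.1;        -- equal for a heavily cached data set
  SET effective_cache_size = '6GB';  -- shared_buffers + OS cache (assumed)
  SET cpu_tuple_cost = 0.03;         -- up from the 0.01 default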

If such tuning does cause it to choose the plan you expect, be sure
to time it against what you have been getting. If the new plan is
slower, you've taken the adjustments too far.
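
A simple way to compare is to run EXPLAIN ANALYZE on the query
once with your current settings and once more in the same session
after adjusting them (the SELECT here is just a placeholder for
your actual query):

  EXPLAIN ANALYZE SELECT ...;   -- baseline plan and actual runtime
  SET random_page_cost = 0.1;   -- or whichever setting you are testing
  EXPLAIN ANALYZE SELECT ...;   -- rerun and compare the total runtime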

-Kevin
