Re: Query help

From: Scott Carey <scott(at)richrelevance(dot)com>
To: Subbiah Stalin-XCGF84 <SSubbiah(at)Motorola(dot)com>, Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Query help
Date: 2009-08-05 19:30:38
Message-ID: C69F2BEE.E328%scott@richrelevance.com
Lists: pgsql-performance

On 8/5/09 12:16 PM, "Subbiah Stalin-XCGF84" <SSubbiah(at)Motorola(dot)com> wrote:

> We have found the problem. Apparently a query doing a count on a
> 45 million row table was run prior to the episode of slow queries, which
> definitely pushed the cached data out of memory. Is there a way to
> assign a portion of memory for recycling purposes, as in Oracle, so the
> cached data doesn't get affected by queries like these?
>
> Stalin

In Postgres 8.3 and above, large sequential scans use a small ring of
buffers and don't evict other data from shared_buffers. But they can still
push things out of the OS page cache.
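If you want to see what actually is sitting in shared_buffers after one of
those big counts, the pg_buffercache contrib module (installed separately,
superuser only) gives a per-relation breakdown; a rough sketch:

  -- top 10 relations by number of shared_buffers pages currently cached
  SELECT c.relname, count(*) AS buffers
    FROM pg_buffercache b
    JOIN pg_class c
      ON b.relfilenode = c.relfilenode
     AND b.reldatabase = (SELECT oid FROM pg_database
                          WHERE datname = current_database())
   GROUP BY c.relname
   ORDER BY buffers DESC
   LIMIT 10;

The OS page cache is outside Postgres, so you have to watch that from the
OS side (vmstat, free) instead.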

>
> -----Original Message-----
> From: Kevin Grittner [mailto:Kevin(dot)Grittner(at)wicourts(dot)gov]
> Sent: Tuesday, August 04, 2009 8:57 AM
> To: Subbiah Stalin-XCGF84; pgsql-performance(at)postgresql(dot)org
> Subject: RE: [PERFORM] Query help
>
> "Subbiah Stalin-XCGF84" <SSubbiah(at)Motorola(dot)com> wrote:
>
>> The server has 32G of memory and is dedicated to running PG; no other
>> application is sharing this database.
>
> It's not likely to help with this particular problem, but it's generally
> best to start from a position of letting the optimizer know what it's
> really got for resources. An effective_cache_size setting of somewhere
> around 30GB would probably be best here.
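> For example (it's only a planner hint, not an allocation, so the 30GB
> figure just needs to be roughly right -- shared_buffers plus whatever the
> OS is likely to cache):
>
>   SHOW effective_cache_size;          -- what the planner assumes now
>   SET effective_cache_size = '30GB';  -- per-session test; put it in
>                                       -- postgresql.conf to make it
>                                       -- permanent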
>
>> Given the nature of the ix_objects_type_lastmodified index, I'm
>> wondering if the index needs to be rebuilt. I tested rebuilding it in
>> another db, and it came to 2500 pages as opposed to 38640 pages.
>
> That's pretty serious bloat. Any idea how that happened? Have you had a
> long-running database transaction which might have prevented normal
> maintenance from working? If not, you may need more aggressive settings
> for autovacuum. Anyway, sure, try this with the index rebuilt. If you
> don't want downtime, use CREATE INDEX CONCURRENTLY and then drop the old
> index. (You could then rename the new index to match the old, if
> needed.)
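> Roughly like this (I'm guessing the table and column list from the index
> name, so adjust to the real definition):
>
>   CREATE INDEX CONCURRENTLY ix_objects_type_lastmodified_new
>       ON objects (type, lastmodified);
>   DROP INDEX ix_objects_type_lastmodified;
>   ALTER INDEX ix_objects_type_lastmodified_new
>       RENAME TO ix_objects_type_lastmodified;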
>
>> The puzzle is why the same query with the same filters usually runs
>> fast, but at times takes 5+ minutes and then switches back to fast
>> mode.
>
> It is likely either that something has pushed the relevant data out of
> cache before the slow runs, or there is blocking. How big is this
> database? Can you get a list of pg_stat_activity and pg_locks during an
> episode of slow run time?
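> Something along these lines would do, run while the query is slow (the
> pg_stat_activity column names here are the 8.3 ones; they changed in
> later releases):
>
>   -- what every backend is doing, and whether it is blocked on a lock
>   SELECT procpid, usename, waiting, query_start, current_query
>     FROM pg_stat_activity
>    ORDER BY query_start;
>
>   -- lock requests that have not been granted
>   SELECT l.pid, l.locktype, l.mode, l.granted, c.relname
>     FROM pg_locks l
>     LEFT JOIN pg_class c ON c.oid = l.relation
>    WHERE NOT l.granted;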
>
>> If it had used a different execution plan than the above, how do I
>> list all the execution plans executed for a given SQL statement?
>
> It's unlikely that the slow runs are because of a different plan being
> chosen. I was wondering if a better plan might be available, but this
> one looks pretty good with your current indexes. I can think of an
> indexing change or two which *might* cause the optimizer to pick a
> different plan, but that is far from certain, and without knowing the
> cause of the occasional slow runs, it's hard to be sure that the new
> plan wouldn't get stalled for the same reasons.
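> That said, if you do want the plan of each slow run captured, the
> auto_explain contrib module (new in 8.4) can log the plan of any
> statement that runs longer than a threshold; something like:
>
>   LOAD 'auto_explain';                        -- per-session; use
>   SET auto_explain.log_min_duration = '10s';  -- shared_preload_libraries
>   SET auto_explain.log_analyze = on;          -- in postgresql.conf to
>                                               -- cover all sessions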
>
> If it's possible to gather more data during an episode of a slow run,
> particularly the pg_stat_activity and pg_locks lists, run as the
> database superuser, it would help pin down the cause. A vmstat during
> such an episode, to compare to a "normal" one, might also be
> instructive.
>
> -Kevin
