Re: anti-join chosen even when slower than old plan

From: Bob Lunney <bob_lunney(at)yahoo(dot)com>
To: Kenneth Marshall <ktm(at)rice(dot)edu>, mladen(dot)gogala(at)vmsinfo(dot)com
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: anti-join chosen even when slower than old plan
Date: 2010-11-11 17:05:56
Message-ID: 926836.80178.qm@web39707.mail.mud.yahoo.com
Lists: pgsql-performance


--- On Thu, 11/11/10, Mladen Gogala <mladen(dot)gogala(at)vmsinfo(dot)com> wrote:

> From: Mladen Gogala <mladen(dot)gogala(at)vmsinfo(dot)com>
> Subject: Re: [PERFORM] anti-join chosen even when slower than old plan
> To: "Kenneth Marshall" <ktm(at)rice(dot)edu>
> Cc: "Robert Haas" <robertmhaas(at)gmail(dot)com>, "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
> Date: Thursday, November 11, 2010, 9:15 AM
> Kenneth Marshall wrote:
> > I agree with the goal of avoiding the need for a GUC. This needs
> > to be as automatic as possible. One idea I had had was computing a
> > value for the amount of cache data in the system by keeping a sum
> > or a weighted sum of the table usage in the system. Smaller tables
> > and indexes would contribute a smaller amount to the total, while
> > larger indexes and tables would contribute a larger amount. Then by
> > comparing this running total to the effective_cache_size, set the
> > random and sequential costs for a query. This would allow the case
> > of many 4MB tables to favor disk I/O more than memory I/O. The
> > weighting could be a function of simultaneous users of the table. I
> > know this is a bit of hand-waving but some sort of dynamic feedback
> > needs to be provided to the planning process as system use
> > increases.
> >
> > Regards,
> > Ken
> >
> Kenneth, you seem to be concerned only with the accuracy of the
> planning process, not with plan stability. As a DBA who has to
> monitor real world applications, I find things like an execution plan
> changing with the use of the system to be my worst nightmare. The
> part where you say that "this needs to be as automatic as possible"
> probably means that I will not be able to do anything about it if the
> optimizer, by any chance, doesn't get it right. That looks to me like
> an entirely wrong way to go. When an application developer tunes the
> SQL, both he and I expect that SQL to always perform that way, not to
> change the execution plan because the system is utilized more than it
> was an hour ago. Nobody seems to have taken my suggestion about
> having a parameter which would simply "invent" the percentage out of
> thin air seriously, because it's obviously not accurate. However,
> planner accuracy is not the only concern. Running applications on the
> system usually requires plan stability. Means of external control
> over the execution plan, DBA knobs and buttons that can be turned and
> pushed to produce the desired plan, are also very much needed.
>
> -- Mladen Gogala Sr. Oracle DBA
> 1500 Broadway
> New York, NY 10036
> (212) 329-5251
> www.vmsinfo.com
>

Mladen,

Been there, done that with Oracle for more years than I care to remember or admit. Having the necessary knobs was both daunting and a godsend, depending on whether you could find the right one(s) to frob during production use, and whether you turned them the right way and by the right amount. I personally find having less knobbage with PostgreSQL to be a huge benefit over Oracle. In that spirit, I offer the following suggestion. (Ken's original suggestion inspired it, so if I've misunderstood it, Ken, please correct me.)

What if the code that manages the shared buffer cache kept track of how many buffers were in the cache for each table and index? Then the optimizer could know the ratio of cached to non-cached table or index buffers (how many pages are in PG's buffer cache vs. the total number of pages required for the entire relation, assuming autovacuum is working well) and plan accordingly. It would even be possible to skew the estimate based on the ratio of shared_buffers to effective_cache_size. The optimizer could then dynamically adjust the random and sequential costs per query prior to planning, with (hopefully) plans optimized to the current condition of the server and host caches just prior to execution.
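
To make that a little more concrete, here is a rough sketch of the kind of blending I have in mind. This is not PostgreSQL source and none of these names exist in the code; the skew factor is just one way of reading the shared_buffers to effective_cache_size ratio:

/*
 * Purely illustrative sketch -- not PostgreSQL source.  All names
 * (CachedRelStats, adjusted_random_page_cost) and all numbers are
 * invented for this example.
 */
#include <stdio.h>

typedef struct CachedRelStats
{
    double total_pages;   /* relation size, in 8 kB pages */
    double cached_pages;  /* pages currently in shared_buffers */
} CachedRelStats;

/*
 * Blend random_page_cost (cold reads) toward seq_page_cost (already
 * in memory) according to the fraction of the relation that the
 * shared buffer cache holds.  The observed fraction is skewed upward
 * by effective_cache_size / shared_buffers, on the theory that the
 * OS cache holds more of the relation than shared_buffers alone
 * shows.
 */
static double
adjusted_random_page_cost(const CachedRelStats *rel,
                          double random_page_cost,
                          double seq_page_cost,
                          double shared_buffers_pages,
                          double effective_cache_pages)
{
    double frac;

    if (rel->total_pages <= 0.0)
        return random_page_cost;

    frac = rel->cached_pages / rel->total_pages;

    if (shared_buffers_pages > 0.0 &&
        effective_cache_pages > shared_buffers_pages)
        frac *= effective_cache_pages / shared_buffers_pages;
    if (frac > 1.0)
        frac = 1.0;

    return frac * seq_page_cost + (1.0 - frac) * random_page_cost;
}

int
main(void)
{
    /* a 10,000-page table with 2,500 pages in shared_buffers;
     * shared_buffers = 1 GB, effective_cache_size = 3 GB */
    CachedRelStats rel = { 10000.0, 2500.0 };

    printf("adjusted random_page_cost = %.2f\n",
           adjusted_random_page_cost(&rel, 4.0, 1.0, 131072.0, 393216.0));
    return 0;
}

Run against that example, the blended cost lands much closer to seq_page_cost than to the default random_page_cost, which is roughly the behavior I would want the planner to pick up on its own.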

There are lots of assumptions here, the primary ones being that the shared buffer cache's state doesn't change significantly between the start of planning and actual execution, and that the host is dedicated to running the database and nothing else that would trash the file system cache. I admit that I haven't looked at the code for this yet, so I don't know if I'm on to something or off in the weeds.
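
For completeness, Ken's weighted-sum idea quoted above could be prototyped along the same lines. Again, only a sketch: the names are invented and the weighting is a stand-in for whatever function of simultaneous users actually makes sense.

/*
 * Rough sketch of Ken's quoted idea: keep a weighted running total of
 * the tables and indexes the system is touching, compare it to
 * effective_cache_size, and derate the cost of random I/O only when
 * the working set looks like it fits in cache.  Invented names and
 * weights throughout.
 */
#include <stdio.h>

/* one entry per table or index currently in use */
typedef struct RelUsage
{
    double pages;            /* size of the relation in pages */
    int    concurrent_users; /* backends currently scanning it */
} RelUsage;

/* weighted working-set estimate: bigger and busier relations count more */
static double
weighted_working_set(const RelUsage *rels, int nrels)
{
    double total = 0.0;
    int    i;

    for (i = 0; i < nrels; i++)
        total += rels[i].pages * (1.0 + rels[i].concurrent_users);
    return total;
}

/*
 * If the weighted working set fits within effective_cache_size, lower
 * random_page_cost toward seq_page_cost; if it overflows, leave the
 * disk-oriented default alone.
 */
static double
feedback_random_page_cost(double working_set_pages,
                          double effective_cache_pages,
                          double random_page_cost,
                          double seq_page_cost)
{
    double pressure;

    if (effective_cache_pages <= 0.0)
        return random_page_cost;

    pressure = working_set_pages / effective_cache_pages;
    if (pressure >= 1.0)
        return random_page_cost;

    return seq_page_cost + pressure * (random_page_cost - seq_page_cost);
}

int
main(void)
{
    /* four 4 MB tables (512 pages each) with a few users apiece;
     * effective_cache_size = 3 GB */
    RelUsage rels[] = { {512, 2}, {512, 1}, {512, 4}, {512, 1} };
    double   ws;

    ws = weighted_working_set(rels, (int) (sizeof(rels) / sizeof(rels[0])));
    printf("working set = %.0f pages, random cost = %.2f\n",
           ws, feedback_random_page_cost(ws, 393216.0, 4.0, 1.0));
    return 0;
}

If the weighted working set overflows effective_cache_size the default disk-oriented costs stay put; if it fits, random I/O gets costed closer to sequential. That is the dynamic feedback Ken describes, minus all the hard parts.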

Regards,

Bob Lunney
