Re: Re: Query not using index

From: "Richard Huxton" <dev(at)archonet(dot)com>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: <pgsql-general(at)postgresql(dot)org>
Subject: Re: Re: Query not using index
Date: 2001-05-10 23:16:10
Message-ID: 007501c0d9a7$37cfb960$1001a8c0@archonet.com
Lists: pgsql-general

From: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>

> > I had a similar situation, where I had a lot of rows with 0's in
> > them. Changing those to NULLs worked wonders.
>
> Yes, if you have a lot of "dummy" values it's a good idea to represent
> them as NULLs rather than some arbitrarily-chosen regular data value.
> The planner does keep track of NULLs separately from everything else.
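
For reference, the sort of change being described would look something like
this - the table and column names here are made up:

    -- replace the "dummy" zero values with NULLs
    UPDATE orders SET discount = NULL WHERE discount = 0;
    VACUUM ANALYZE orders;  -- refresh the planner's statistics

    -- the planner tracks the fraction of NULLs separately from ordinary
    -- values, so a query for a genuine (rare) value is much more likely
    -- to use the index
    SELECT * FROM orders WHERE discount = 15;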

Is there a good reason why RDBMSs don't just keep a cache of decisions on
this sort of thing? I realise SQL is supposed to be ad hoc, but in reality
it's the old 90:10 rule: a handful of queries get run consistently, and
those are the ones where performance is important.

Why doesn't PG (or any other system, AFAIK) just make a first guess, run the
query, and then, if the cost estimates turn out to be horribly wrong, cache
the corrected result? I'm guessing there's a Bloody Good Reason (TM) for it,
since query planning has got to be equivalent to least-cost path, so NP-hard
(NP-complete? I forget - too long out of college).
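
A rough manual version of that feedback loop, using the same made-up table
as above, would be to compare the planner's row estimate against the real
count:

    EXPLAIN SELECT * FROM orders WHERE discount = 15;   -- planner's guess
    SELECT count(*) FROM orders WHERE discount = 15;    -- actual rows

If those two numbers are wildly different, the plan was built on bad
assumptions.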

- Richard Huxton
