> I want to try to add a multi-calendar system for pgsql. I want to know if it will be accepted as a patch to pgsql?
> More details:
> Multi-calendar systems are useful for several languages and countries that use different calendars: Hijri, Persian, Hebrew, ...
> For implementation I think it is better to add this support in date fields like this:
> create table tb (dt date calendar persian);
> If no calendar is defined, it will be Gregorian, so old SQL commands are unaffected.
I don't think that new keywords should be added for this if they do not
belong to the SQL standard, especially for something as open-ended and
sensitive as a culturally related keyword: there are dozens of calendars listed.
ISTM that this is either a localization problem, possibly fully
independent from pg, or a conversion issue with a simple function which
may be developed as an extension outside pg, say:
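A minimal sketch of that conversion-function idea (all names here are hypothetical, not an existing extension): a common approach is to keep dates Gregorian inside the database and pivot through the Julian Day Number, so each calendar only needs conversions to and from JDN. The formulas below are the standard Fliegel-Van Flandern ones.

```python
# Sketch of the "conversion as an extension" idea: store dates in one
# internal form and convert at the edges. The Julian Day Number (JDN) is
# a convenient pivot; a Persian, Hijri, or Hebrew converter then only
# needs its own to/from-JDN pair.

def gregorian_to_jdn(year: int, month: int, day: int) -> int:
    """Gregorian calendar date -> Julian Day Number."""
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    return (day + (153 * m + 2) // 5 + 365 * y
            + y // 4 - y // 100 + y // 400 - 32045)

def jdn_to_gregorian(jdn: int) -> tuple[int, int, int]:
    """Julian Day Number -> Gregorian (year, month, day)."""
    a = jdn + 32044
    b = (4 * a + 3) // 146097
    c = a - 146097 * b // 4
    d = (4 * c + 3) // 1461
    e = c - 1461 * d // 4
    m = (5 * e + 2) // 153
    day = e - (153 * m + 2) // 5 + 1
    month = m + 3 - 12 * (m // 10)
    year = 100 * b + d - 4800 + m // 10
    return year, month, day

# A hypothetical persian_to_jdn()/jdn_to_persian() pair would plug in
# here; in PostgreSQL these could be exposed as ordinary SQL functions
# from a contrib-style extension, with no new keywords.
print(gregorian_to_jdn(2000, 1, 1))    # 2451545
print(jdn_to_gregorian(2451545))       # (2000, 1, 1)
```

This keeps the calendar question entirely in user-space functions, so the `date` type and existing SQL remain untouched.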
From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Sam Mason <sam(at)samason(dot)me(dot)uk>, pgsql-hackers(at)postgresql(dot)org
Date: Wed, 18 Feb 2009 12:08:48 -0500
Subject: Re: The science of optimization in practical terms?
On Wed, Feb 18, 2009 at 11:46 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Robert Haas <robertmhaas(at)gmail(dot)com> writes:
>> Yeah, I thought about this too, but it seems like overkill for the
>> problem at hand, and as you say it's not clear you'd get any benefit
>> out of the upper bound anyway. I was thinking of something simpler:
>> instead of directly multiplying 0.005 into the selectivity every time
>> you find something incomprehensible, keep a count of the number of
>> incomprehensible things you saw and at the end multiply by 0.005/N.
>> That way more unknown quals look more restrictive than fewer, but
>> things only get linearly wacky instead of exponentially wacky.
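The quoted arithmetic can be made concrete with a toy comparison (illustrative only; `0.005` mirrors the default selectivity figure used in the thread):

```python
# Toy comparison of the two ways of penalizing N "incomprehensible" quals:
# the current behavior multiplies the 0.005 default in once per unknown
# qual (exponential in N), while the proposal counts the unknowns and
# applies a single 0.005/N factor at the end (linear).

DEFAULT_SEL = 0.005

def exponential_penalty(n: int) -> float:
    """Multiply the default in for each of the n unknown quals."""
    return DEFAULT_SEL ** n

def linear_penalty(n: int) -> float:
    """Count the unknown quals and apply one combined 0.005/n factor."""
    return DEFAULT_SEL / n

for n in range(1, 5):
    print(n, exponential_penalty(n), linear_penalty(n))
```

In both schemes more unknown quals look more restrictive, but at N=4 the exponential version has already driven the estimate down to 6.25e-10, while the linear one sits at a far less wacky 0.00125.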
> clauselist_selectivity could perhaps apply such a heuristic, although
> I'm not sure how it could recognize "default" estimates from the various
> specific estimators, since they're mostly all different.
Presumably the estimators would need to be modified to provide some
information on their level of confidence in their estimate (possibly
this could be more general than whether the number is a default or
not, though I'm not sure what we'd do with that information). But it
may not be necessary to go through that pain if we implement your idea.
> Personally I've not seen all that many practical cases where the
> estimator simply hasn't got a clue at all. What's far more commonly
> complained of IME is failure to handle *correlated* conditions in
> an accurate fashion. Maybe we should just discount the product
> selectivity all the time, not only when we think the components are
> default estimates.
That has something going for it, although off the top of my head I'm
not sure exactly what formula would make sense. Presumably we want
the overall selectivity estimate to be less than the estimate for any
individual clause taken alone, but how much less? It doesn't
seem right to estimate the selectivity of S_1...S_n as MIN(S_1 ...
S_n) / n, because that will give you weird results with things like a
= 1 AND a != 2. You might need to divide the estimates into two
buckets: those that reduce selectivity by a lot, and those that reduce
it only slightly, then multiply the latter bucket and, say, divide
through by the cardinality of the former bucket. But the exact
details of the math are not obvious to me.
I'm talking off the top of my head here, maybe you have a more clear
thought as to how this would work?
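One possible reading of the two-bucket idea sketched above, purely as a strawman (the threshold and the MIN-divided-by-count rule are guesses; the thread does not settle on any formula):

```python
# Strawman sketch of the two-bucket heuristic. Clauses that "reduce
# selectivity by a lot" (below an arbitrary threshold) go in the strong
# bucket; the mild ones are simply multiplied together as today. For the
# strong bucket we use MIN(...) / count, i.e. the earlier MIN-based idea
# applied only to the strongly selective clauses, so that mild quals like
# "a != 2" cannot drag a correlated estimate toward zero.

def combined_selectivity(sels: list[float], threshold: float = 0.5) -> float:
    strong = [s for s in sels if s < threshold]   # big reducers
    mild = [s for s in sels if s >= threshold]    # slight reducers
    est = 1.0
    for s in mild:
        est *= s                                  # safe to multiply these
    if strong:
        est *= min(strong) / len(strong)          # damped, not multiplied
    return est

# "a = 1 AND a != 2": the mild a != 2 clause barely moves the estimate
# instead of compounding with the equality clause.
print(combined_selectivity([0.01, 0.99]))
```

This avoids the MIN(S_1...S_n)/n pathology for near-1 clauses, though it just relocates the arbitrariness into the threshold.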