From: Konstantin Knizhnik <k(dot)knizhnik(at)postgrespro(dot)ru>
To: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, 曾文旌(义从) <wenjing(dot)zwj(at)alibaba-inc(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, 蔡松露(子嘉) <zijia(at)taobao(dot)com>, "Cai, Le" <le(dot)cai(at)alibaba-inc(dot)com>, 张广舟(明虚) <guangzhou(dot)zgz(at)alibaba-inc(dot)com>, 赵殿奎 <diankui(dot)zdk(at)alibaba-inc(dot)com>, 萧少聪(铁庵) <shaocong(dot)xsc(at)alibaba-inc(dot)com>
Subject: Re: [Proposal] Global temporary tables
On 25.10.2019 20:00, Pavel Stehule wrote:
> >> So apart from the limitation mentioned above (which I do not
> consider critical), there is only one problem which was not
> addressed: maintaining statistics for GTT.
> >> If all of the following conditions are true:
> >> 1) GTT are used in joins
> >> 2) There are indexes defined for GTT
> >> 3) Size and histogram of GTT in different backends can
> significantly vary.
> >> 4) ANALYZE was explicitly called for GTT
> >> then a query execution plan built in one backend will also be
> used in other backends, where it can be inefficient.
> >> I also do not consider this problem a "show stopper" for
> adding GTT to Postgres.
> > I think that's *definitely* a show stopper.
> Well, if both you and Pavel think that it is really a "show
> stopper", then
> this problem really has to be addressed.
> I am slightly confused by this opinion, because Pavel himself has
> told me that 99% of users never create indexes on temp tables
> or run ANALYZE on them. And without those, this problem does not
> arise at all.
> Users don't run ANALYZE on temp tables in 99% of cases. That's true. But the
> second fact is that those users have a lot of problems. It's very similar to
> wrong statistics on persistent tables. When the data are small, it is not a
> problem for users, although from my perspective it's still not optimal. When
> the data are not small, the problem can be brutal. Temporary tables
> are not an exception. And users and developers are people - we hear
> only about fatal problems. There are lots of unoptimized queries, but
> because the problem is not fatal, nobody reports it.
> And a lot of people have no idea how fast a database can be. The
> knowledge of users and app developers is a sad story.
It seems to me that I have found a quite elegant solution for per-backend
statistics for GTT: I just insert the statistics into the backend's catalog
cache, but not into the pg_statistic table itself.
To do this I had to add InsertSysCache/InsertCatCache functions which
insert a pinned entry into the corresponding cache.
I wonder if there are any pitfalls to this approach?
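For illustration, here is a rough pseudocode sketch of the idea. Only the
function names InsertSysCache/InsertCatCache and the use of the backend-local
catalog cache come from the description above; everything else (the ANALYZE
hook point, the STATRELATTINH cache, the pinning detail) is my illustrative
assumption, not the actual patch code:

```
/* Pseudocode sketch, not real PostgreSQL code.
 * Idea: keep GTT statistics backend-local by bypassing the shared
 * pg_statistic heap and populating only this backend's syscache. */
on ANALYZE of a global temporary table:
    for each analyzed attribute:
        stup = build pg_statistic tuple from the collected stats
        /* Do NOT heap_insert() into pg_statistic: that would make the
         * statistics visible to (and shared by) every backend. */
        InsertCatCache(STATRELATTINH cache, stup)   /* pinned entry */
        /* A pinned entry is not evicted or flushed by invalidation,
         * so this backend's planner keeps seeing statistics that
         * match this backend's private contents of the GTT. */
```

The planner's statistics lookups already go through the syscache, so under
these assumptions no planner changes would be needed: each backend simply
finds its own pinned entries there.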
New patch for GTT is attached.
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company