From: "Tao Ma" <feng_eden(at)163(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: huge query tree cost too much time to copyObject()
Date: 2008-11-24 07:57:20
Message-ID: ggdmp3$2i0a$1@news.hub.org
Lists: pgsql-hackers
Hi,
Recently I wrote a really complex SELECT statement that joins about
20 relations with NATURAL JOIN, where each relation has about 50
columns. It looks like:
PREPARE ugly_stmt AS
SELECT * FROM t1 NATURAL JOIN t2 NATURAL JOIN t3 ... NATURAL JOIN t20 WHERE
id = $1;
All the tables share only one common column, "id", which is also defined as
the primary key.
I set join_collapse_limit to 1 and use a prepared statement so the
query can be executed many times without replanning.
It seems Postgres spends a lot of time in copyObject(). So could I just
allocate a new memory context under TopMemoryContext before running
QueryRewrite() and pg_plan_queries(), and save the results into the hash
table without copying query_list and plan_list again (I think they live in
the context I created)? I know I accept a long-term memory leak until I
deallocate the prepared statement, but it saves a lot of time in my
situation, and I can live with that.
Thanks in advance