Re: 'DROP INDEX' kills stored procedures

From: Vlad Krupin <vlad(at)echospace(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "scott(dot)marlowe" <scott(dot)marlowe(at)ihs(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: 'DROP INDEX' kills stored procedures
Date: 2003-04-03 22:13:44
Message-ID: 3E8CB218.7060006@echospace.com
Lists: pgsql-general

Tom Lane wrote:

>"scott.marlowe" <scott(dot)marlowe(at)ihs(dot)com> writes:
>
>
>>On Thu, 3 Apr 2003, Vlad Krupin wrote:
>>
>>
>>>consider this sequence of commands:
>>>
>>>#1 BEGIN;
>>>#2 BEGIN;
>>>#2 DROP INDEX "bar_idx";
>>>#1 EXPLAIN ANALYZE SELECT * FROM "foo" WHERE "bar"='hello';
>>>
>>>This performs a sequential scan for me, even though I have not committed
>>>the transaction on client #2 yet! If I do not drop the index (no #2
>>>statements), it performs an indexed scan.
>>>
>>>
>
>
>
>>On my 7.2.x box, this results in #1 waiting for #2 to commit. It just
>>pauses #1 indefinitely. Are you running 7.3.x? Might explain the
>>differences.
>>
>>
>
>I overlooked this part of Vlad's message. AFAIK the above should cause #1
>to wait for #2's commit in *any* version of Postgres; certainly anything
>released in the last several years. DROP INDEX will take an exclusive
>lock on the table owning the index, and that will prevent EXPLAIN from
>accessing the table even just to plan a query on it.
>
You are right. I must have assumed the SELECT was taking a long time to
run because of the sequential scan, and typed 'COMMIT' in the transaction
that was dropping indexes. That would make the other transaction do a
sequential scan, and that's what I saw. Oops... I tried to reproduce
what I claimed to have seen in the previous email and was unable to do
so - indeed it does lock the table indefinitely. Sorry for the confusion.

That was my mistake, but the problem I was trying to solve still stands.
Based on what Tom said, it's due to the plan being cached.
I do something like this:
1. Start a connection
2. Execute a stored procedure (it succeeds and caches a plan that
includes the OID of an index it used).
3. In a different connection, drop and re-create the index (it gets a new
OID).
4. Try to repeat step (2) and get a 'Relation [OID of dropped index]
does not exist' error.
- If I close and re-open the connection somewhere between steps 2 and 4,
everything is good.
- Also, it does not seem to affect regular SELECT queries - only queries
wrapped in my stored procedure that returns a refcursor. I do not know
why; I imagine the planner would cache those plans too, right?
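For the archives, the four steps above can be sketched roughly as follows. This is a minimal reproduction sketch, not the original poster's actual schema: the table, column, and index names come from the earlier message, but the function get_foo and the cursor name c are hypothetical, and the exact error text will vary by version. It assumes a running PostgreSQL server with plpgsql installed.

CREATE TABLE foo (bar text);
CREATE INDEX bar_idx ON foo (bar);

-- Hypothetical procedure returning a refcursor (old-style quoted body,
-- as used in the 7.3 era).
CREATE FUNCTION get_foo(refcursor) RETURNS refcursor AS '
BEGIN
    OPEN $1 FOR SELECT * FROM foo WHERE bar = ''hello'';
    RETURN $1;
END;
' LANGUAGE plpgsql;

-- Connection #1, step 2: first call succeeds and caches a plan that
-- references bar_idx by OID.
BEGIN;
SELECT get_foo('c');
FETCH ALL FROM c;
COMMIT;

-- Connection #2, step 3: re-creating the index gives it a new OID.
DROP INDEX bar_idx;
CREATE INDEX bar_idx ON foo (bar);

-- Connection #1 again, step 4: the cached plan still points at the old
-- OID, so this now fails with something like
-- ERROR: Relation <old OID> does not exist
BEGIN;
SELECT get_foo('c');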

While closing connections solves the problem, it's not a good fix,
because connections are pooled on the client and I have no control over
them. What would be super-nice is if, after doing CREATE INDEX but
before COMMIT, I could do something to tell the planner to discard its
cached plans for that table - including plans cached by other
concurrent connections. Is there a way to do that?

An even better way would be for all planners to discard their cached
plans automatically when an index they rely on disappears. Otherwise a
seemingly innocent operation (DROP INDEX) results not in slower
execution, but in a stored procedure failure.

Is that possible, or is closing the connection my only way of working
around that problem?

Thanks for your help, guys,

Vlad

--
Vlad Krupin
Software Engineer
echospace.com
