| From: | Vojtěch Rylko <vojta(dot)rylko(at)seznam(dot)cz> |
|---|---|
| To: | pgsql-general(at)postgresql(dot)org |
| Subject: | Re: Drop big index |
| Date: | 2012-02-16 14:56:13 |
| Message-ID: | 4F3D190D.9030103@seznam.cz |
| Lists: | pgsql-general |
On 16 Feb 2012 at 9:53, Marti Raudsepp wrote:
> 2012/2/15 Vojtěch Rylko <vojta(dot)rylko(at)seznam(dot)cz>:
>> this query ran so long and blocked the table that I had to interrupt it. Is
>> there any way to drop large indexes in a non-blocking or /faster/ way?
> Usually the problem is not the size of the index, but some other
> running transaction that holds a read lock on the table and prevents
> the DROP INDEX from acquiring an exclusive lock. Once the exclusive
> lock is granted, the drop itself is usually very fast.
>
> Run 'select * from pg_stat_activity' and see if there are any "<IDLE>
> in transaction" connections. It's normal to have these for a second or
> two, but longer idle transactions usually indicate an application bug
> -- it started a transaction but "forgot" to roll back or commit. These
> are problematic for exactly this reason: locks can't be released until
> the transaction finishes.
>
> Regards,
> Marti
>
Thanks! It was indeed caused by an "IDLE in transaction" connection. My
nightmare is solved: dropping the 7 GB index took only 2353 ms.
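For the record, the check Marti suggested looked roughly like the sketch below. Column names assume PostgreSQL 9.2 or later; on 9.1 and earlier the state shows up as '&lt;IDLE&gt; in transaction' in the current_query column, and the pid column is named procpid. The pid in the second statement is a placeholder.

```sql
-- List sessions stuck "idle in transaction", oldest first
-- (these can hold locks that block DROP INDEX indefinitely).
SELECT pid, usename, xact_start, state, query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY xact_start;

-- If the session belongs to a buggy client, terminate it so
-- DROP INDEX can acquire its ACCESS EXCLUSIVE lock:
SELECT pg_terminate_backend(12345);  -- substitute the pid found above
```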
Regards,
Vojtěch R.
| From | Date | Subject | |
|---|---|---|---|
| Next Message | ChoonSoo Park | 2012-02-16 15:48:58 | How to dereference 2 dimensional array? |
| Previous Message | Adrian Klaver | 2012-02-16 14:56:07 | Re: Dynamic update of a date field |