From: David Waller <dwaller(at)yammer-inc(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "pgsql-bugs(at)postgresql(dot)org" <pgsql-bugs(at)postgresql(dot)org>
Subject: Re: BUG #14237: Terrible performance after accidentally running 'drop index' for index still being created
Date: 2016-07-13 10:42:09
Message-ID: 906C6061-69F6-44B9-9CC9-40281E3F27D6@microsoft.com
Lists: pgsql-bugs
On 11/07/2016, 18:08, "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> David Waller <dwaller(at)yammer-inc(dot)com> writes:
> > Thank you for the detailed explanation. This all seems very sensible, and
> > reasonable behaviour from Postgres. Yet... it still 'allowed' me to shoot myself
> > painfully in the foot. User error, I agree, yet people make mistakes - could
> > Postgres behave more gracefully?
>
> Well, there are always tradeoffs. You could choose to run with a
> non-infinite setting of lock_timeout, which would have caused the DROP to
> fail after waiting a second or two (or whatever you set the timeout to
> be). That would move the denial of service over to the problematic DDL,
> which might be a good tradeoff for your environment. But not everybody is
> going to think that query failure is a "more graceful" solution.
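To illustrate the tradeoff Tom describes, here is a minimal sketch of using `lock_timeout` at the session level; the index name is hypothetical and not from this thread:

```sql
-- Applies to this session only; 0 (the default) means wait indefinitely.
SET lock_timeout = '2s';

-- With the timeout set, a DROP INDEX that cannot obtain its
-- ACCESS EXCLUSIVE lock within 2 seconds fails with an error,
-- instead of queueing behind the in-progress index build and
-- blocking every later query on the table behind it.
DROP INDEX some_index;  -- hypothetical index name

-- Restore the default when finished.
RESET lock_timeout;
```

The setting can also be applied per-role or per-database with ALTER ROLE / ALTER DATABASE, so DDL run by a particular user fails fast rather than causing a pile-up.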
Thank you! lock_timeout sounds like exactly what I need to set - thank you for
helping out.
Now that I know about lock_timeout, I agree there's nothing worth changing here.
Postgres already has the tools built in to give me the behaviour I wanted in
this situation.
Thank you for your patient explanations.
David