Re: Invalid indexes should not consume update overhead

From: Tomasz Ostrowski <tometzky+pg(at)ato(dot)waw(dot)pl>
To: PostgreSQL Bugs <pgsql-bugs(at)postgresql(dot)org>
Cc: Greg Stark <stark(at)mit(dot)edu>
Subject: Re: Invalid indexes should not consume update overhead
Date: 2016-07-17 11:41:52
Message-ID: 578B6F00.3080104@ato.waw.pl
Lists: pgsql-bugs

On 2016-07-17 02:09, Greg Stark wrote:
> The real solution imho is to actually clean up failed index builds when
> a build fails.

That wouldn't solve my problem, which is that I need a way to disable
indexes before a large update. I believe (but I'm not sure) that Oracle
has this concept:
ALTER INDEX [INDEX_NAME] UNUSABLE;
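
The full Oracle flow would then be roughly the following (the index and
table names here are just placeholders, not anything from my schema):

    ALTER INDEX big_update_idx UNUSABLE;
    -- ... run the large UPDATE ...
    ALTER INDEX big_update_idx REBUILD;

The closest workaround I know of in PostgreSQL today is dropping the
index and recreating it afterwards:

    DROP INDEX big_update_idx;
    -- ... run the large UPDATE ...
    CREATE INDEX big_update_idx ON big_update_table (some_column);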

Maybe, if an index is in an invalid state, an UPDATE could check which
part of the table has already been indexed and which part has not, and
maintain index entries only for the part that is already indexed. A
purposely invalid index could then be marked as valid only for block
numbers less than 0, so it would never be touched by updates.
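
For reference, the per-index flags that already exist in pg_index look
like this ("my_idx" is just a placeholder name); today an index that is
invalid but still "ready" is maintained by every write, which is
exactly the overhead in question:

    SELECT indisvalid,  -- planner is allowed to use the index
           indisready,  -- writes must maintain the index
           indislive    -- false only while the index is being dropped
    FROM pg_index
    WHERE indexrelid = 'my_idx'::regclass;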

This might actually be a win during concurrent index creation as well,
since concurrent updates would not have to update the index for every
updated row.

But I don't know whether it's feasible from a concurrency perspective
at all.

Regards,
Tomasz "Tometzky" Ostrowski
